=item OBJECT_REMOTE_LOG_FORWARDING
-Forward log events from remote connections to the local Perl interpreter. Set to 0 to disable
-this feature which is enabled by default. See L<Object::Remote::Logging>.
+Forward log events from remote connections to the local Perl interpreter. Set to 1 to enable
+this feature which is disabled by default. See L<Object::Remote::Logging>.
=item OBJECT_REMOTE_LOG_SELECTIONS
=back
+=head1 KNOWN ISSUES
+
+=over 4
+
+=item Large data structures
+
+Object::Remote communication is encapsulated with JSON, and values passed to remote objects
+will be serialized with it. When sending large data structures, or deeply nested ones
+(hashes in arrays in hashes in arrays), the processor time and memory required
+for serialization and deserialization can be painful or even unworkable. While
+serialization is in progress the local or remote node is blocked, which under worst-case
+conditions can cause all remote interpreters to block as well.
+
+To help deal with this issue, it is possible to configure ulimits for a Perl interpreter
+that is executed by Object::Remote. See C<Object::Remote::Connection> for details.
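+As a sketch (the host name and address-space limit here are illustrative), a memory
+ulimit can be passed when the connection is created so a runaway serialization fails
+instead of exhausting the remote machine:
+
+  use Object::Remote;
+
+  # cap the remote interpreter's virtual memory at roughly 400 MB
+  my $remote = Object::Remote->connect('myserver', ulimit => '-v 400000');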
+
+=item User can starve run loop of execution opportunities
+
+The Object::Remote run loop is responsible for performing I/O and managing timers in a cooperative
+multitasking way, but it can only do these tasks when the user has given control to Object::Remote.
+There are times when Object::Remote must wait for the user to return control to the run loop, and
+during these times no I/O can be performed and no timers can be executed.
+
+As an end user of Object::Remote, if you depend on connection timeouts, the watchdog, or timely
+results from remote objects, be sure to hand control back to Object::Remote as soon as you
+can.
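+For example, using the future-based calls provided by L<Object::Remote::Future> it is
+possible to start several remote calls and then hand control back to the run loop by
+awaiting the results, rather than blocking on local computation in between (a sketch;
+assumes C<@remote_objects> and their C<do_work> method exist):
+
+  use Object::Remote::Future;
+
+  # start:: queues the remote calls without blocking
+  my @futures = map { $_->start::do_work } @remote_objects;
+
+  # await_all gives the run loop control until every result arrives,
+  # so I/O, timers, and the watchdog keep being serviced
+  my @results = await_all @futures;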
+
+=item Run loop favors certain filehandles/connections
+
+=item High levels of load can starve timers of execution opportunities
+
+These are issues that only become a problem at large scales. The end result of these two
+issues is quite similar: some remote objects may block while the local run loop is either busy
+servicing a different connection or is not executing because control has not yet been returned to
+it. For the same reasons timers may not get an opportunity to execute in a timely way.
+
+Internally Object::Remote uses timers managed by the run loop for control tasks. Under
+high load the timers can be preempted by servicing I/O on the filehandles and execution
+can be severely delayed. This can lead to connection watchdogs not being updated or connection
+timeouts taking longer than configured.
+
+=item Deadlocks
+
+Deadlocks can happen quite easily because of flaws either in programs that use Object::Remote or
+in Object::Remote itself, so the C<Object::Remote::WatchDog> is available. When it is used, the run
+loop will periodically update the watchdog object on the remote Perl interpreter. If the
+watchdog goes longer than the configured interval without being updated, it will
+terminate the Perl process. The watchdog will terminate the process even if a deadlock
+condition has occurred.
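+Enabling the watchdog is a matter of passing C<watchdog_timeout> when the connection
+is created (a sketch; the host name and timeout value are illustrative):
+
+  use Object::Remote;
+
+  # terminate the remote interpreter if the watchdog has not been
+  # updated for 120 seconds, for example because of a deadlock
+  my $remote = Object::Remote->connect('myserver', watchdog_timeout => 120);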
+
+=item Log forwarding at scale can starve timers of execution opportunities
+
+Currently log forwarding can be problematic at large scales. When there is a large
+number of log events, the load produced by log forwarding can be high enough that it starves
+the timers, and the remote object watchdogs (if in use) don't get updated in a timely way,
+causing them to erroneously terminate the Perl process. If the watchdog is not in use,
+then connection timeouts can be delayed but will execute when load settles down enough.
+
+Because of these load-related issues Object::Remote disables log forwarding by default.
+See C<Object::Remote::Logging> for information on log forwarding.
+
+=back
+
=head1 SUPPORT
IRC: #web-simple on irc.perl.org
Object::Remote::Connection - An underlying connection for L<Object::Remote>
-=head1 LAME
+ use Object::Remote;
+
+ my %opts = (
+ nice => '10', ulimit => '-v 400000',
+ watchdog_timeout => 120, stderr => \*STDERR,
+ );
+
+ my $local = Object::Remote->connect('-');
+ my $remote = Object::Remote->connect('myserver', nice => 5);
+ my $remote_user = Object::Remote->connect('user@myserver', %opts);
+ my $local_sudo = Object::Remote->connect('user@');
+
+ #$remote can be any other connection object
+ my $hostname = Sys::Hostname->can::on($remote, 'hostname');
+
+=head1 DESCRIPTION
+
+This is the class that supports connections to a Perl interpreter that is executed in a
+different process. The new Perl interpreter can be either on the local or a remote machine
+and is configurable via arguments passed to the constructor.
+
+=head1 ARGUMENTS
+
+=over 4
+
+=item nice
+
+If this value is defined then it will be used as the nice value of the Perl process when it
+is started. The default is undefined, in which case the process is not niced.
+
+=item stderr
+
+If this value is defined then it will be used as the file handle that receives the output
+of STDERR from the Perl interpreter process, and I/O will be performed by the run loop in a
+non-blocking way. If the value is undefined then STDERR of the remote process will be connected
+directly to STDERR of the local process without the run loop managing I/O. The default value
+is undefined.
+
+There are a few ways to use this feature. By default the behavior is to form one unified STDERR
+across all of the Perl interpreters including the local one. For small scale and quick operation
+this offers a predictable and easy to use way to get at error messages generated anywhere. If
+the local Perl interpreter crashes then the remote Perl interpreters still have an active STDERR
+and it is possible to still receive output from them. This is generally a good thing but can
+cause issues.
+
+When using a file handle as the output for STDERR, once the local Perl interpreter is no longer
+running there is no longer a valid STDERR for the remote interpreters to send data to. This means
+that it is no longer possible to receive error output from the remote interpreters and that the
+shell will start to kill off the child processes. Passing a reference to STDERR for the local
+interpreter (as the SYNOPSIS shows) causes the run loop to manage I/O, creating one unified STDERR
+for all Perl interpreters that ends as soon as the local interpreter process does, at which point
+the shell will start killing the children.
+
+It is also possible to pass in a file handle that has been opened for writing. This would be
+useful for logging the output of the remote interpreter directly into a dedicated file.
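+As a sketch (the log file path is illustrative), a file handle opened for writing can
+capture the remote interpreter's error output:
+
+  use Object::Remote;
+
+  open my $log_fh, '>', '/tmp/remote-stderr.log'
+      or die "Could not open log file: $!";
+
+  # the run loop writes the remote interpreter's STDERR
+  # to the log file
+  my $remote = Object::Remote->connect('myserver', stderr => $log_fh);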
+
+=item ulimit
+
+If this string is defined then it will be passed unmodified as the arguments to ulimit when
+the Perl process is started. The default is undefined, in which case the process is not
+limited in any way.
+
+=item watchdog_timeout
+
+If this value is defined then it will be used as the number of seconds the watchdog will wait
+for an update before it terminates the Perl interpreter process. The default is undefined, in
+which case the watchdog is not used. See C<Object::Remote::WatchDog> for more information.
+
+=back
+
+=head1 SEE ALSO
+
+=over 4
+
+=item C<Object::Remote>
-Shipping prioritised over writing this part up. Blame mst.
+=back
=cut