X-Git-Url: http://git.shadowcat.co.uk/gitweb/gitweb.cgi?p=scpubgit%2FObject-Remote.git;a=blobdiff_plain;f=lib%2FObject%2FRemote.pm;h=bacf74da872deec7910a40de49e377e6147c77a3;hp=03095a5bdc21655a23e6e1ca514764964425baac;hb=82d78e8964ee937e837fc21acd06fcb7f9701d6a;hpb=d672a9bff87112bf8cfea0a5749e934d4c8c996e
diff --git a/lib/Object/Remote.pm b/lib/Object/Remote.pm
index 03095a5..bacf74d 100644
--- a/lib/Object/Remote.pm
+++ b/lib/Object/Remote.pm
@@ -5,11 +5,7 @@ use Object::Remote::Handle;
use Object::Remote::Logging qw( :log );
use Module::Runtime qw(use_module);
-our $VERSION = '0.002003'; # 0.2.3
-
-BEGIN {
-  Object::Remote::Logging->init_logging;
-}
+our $VERSION = '0.003006'; # 0.3.6

sub new::on {
  my ($class, $on, @args) = @_;
@@ -31,8 +27,8 @@ sub new {
}

sub connect {
-  my ($class, $to) = @_;
-  use_module('Object::Remote::Connection')->maybe::start::new_from_spec($to);
+  my ($class, $to, @args) = @_;
+  use_module('Object::Remote::Connection')->maybe::start::new_from_spec($to, @args);
}

sub current_loop {
@@ -161,7 +157,71 @@ this feature which is disabled by default. See L.

Space separated list of class names to display logs for if logging output
is enabled. Default value is "Object::Remote::Logging", which selects all logs
generated by Object::Remote.

-See L.
+See L.

=back

=head1 KNOWN ISSUES

=over 4

=item Large data structures

Object::Remote communication is encapsulated with JSON, and values passed to remote objects
are serialized with it. When sending large data structures, or data structures with a lot
of deep complexity (hashes in arrays in hashes in arrays), the processor time and memory
requirements for serialization and deserialization can range from painful to unworkable.
While serialization or deserialization is in progress the local or remote node is blocked,
which under worst-case conditions can cause all remote interpreters to block as well.
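To get a feel for the cost described above, here is a standalone sketch (not Object::Remote's
transport code; the C<deep> helper and its parameters are invented for illustration) that times
a JSON round trip of a deeply nested structure using only core modules:

```perl
#!/usr/bin/env perl
# Sketch: measure how long a JSON encode/decode round trip takes for a
# deeply nested structure (hashes in arrays in hashes). Uses core modules
# only; deep() is a made-up helper, not part of Object::Remote.
use strict;
use warnings;
use JSON::PP;
use Time::HiRes qw(gettimeofday tv_interval);

# Build a nested structure: each level is a hash holding an array of
# $fanout child hashes, $depth levels deep.
sub deep {
    my ($depth, $fanout) = @_;
    return { leaf => 1 } if $depth == 0;
    return { kids => [ map { deep($depth - 1, $fanout) } 1 .. $fanout ] };
}

my $data = deep(3, 20);    # 20^3 = 8000 leaf hashes
my $json = JSON::PP->new;

my $t0    = [gettimeofday];
my $bytes = $json->encode($data);
my $back  = $json->decode($bytes);
my $took  = tv_interval($t0);

printf "serialized %d bytes round trip in %.3fs\n", length($bytes), $took;
```

Increasing the depth or fan-out by even small amounts multiplies both the byte count and the
encode/decode time, which is the blocking effect described above.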
To help deal with this issue it is possible to configure resource ulimits for a Perl interpreter
that is executed by Object::Remote. See C
for details on the perl_command attribute.

=item User can starve run loop of execution opportunities

The Object::Remote run loop is responsible for performing I/O and managing timers in a
cooperative multitasking way, but it can only do these tasks when the user has given control
to Object::Remote. There are times when Object::Remote must wait for the user to return
control to the run loop, and during these times no I/O can be performed and no timers can
be executed.

As an end user of Object::Remote, if you depend on connection timeouts, the watchdog, or
timely results from remote objects, be sure to hand control back to Object::Remote as soon
as you can.

=item Run loop favors certain filehandles/connections

=item High levels of load can starve timers of execution opportunities

These issues only become a problem at large scale. The end result of the two is quite
similar: some remote objects may block while the local run loop is either busy servicing a
different connection or is not executing because control has not yet been returned to it.
For the same reasons, timers may not get an opportunity to execute in a timely way.

Internally, Object::Remote uses timers managed by the run loop for control tasks. Under
high load the timers can be preempted by I/O servicing on the filehandles, and execution
can be severely delayed. This can lead to connection watchdogs not being updated or to
connection timeouts taking longer than configured.

=item Deadlocks

Deadlocks can happen quite easily because of flaws either in programs that use
Object::Remote or in Object::Remote itself, so the C is available.
When enabled, the run loop periodically updates the watchdog object on the remote Perl
interpreter.
If the watchdog goes longer than the configured interval without being updated, it will
terminate the Perl process. The watchdog terminates the process even if a deadlock
condition has occurred.

=item Log forwarding at scale can starve timers of execution opportunities

Currently log forwarding can be problematic at large scale. When there is a large volume
of log events, the load produced by log forwarding can be high enough that it starves the
timers, and the remote object watchdogs (if in use) don't get updated in a timely way,
causing them to erroneously terminate the Perl process. If the watchdog is not in use,
connection timeouts can be delayed but will still fire once load settles down enough.

Because of these load-related issues, Object::Remote disables log forwarding by default.
See C for information on log forwarding.

=back

@@ -175,6 +235,8 @@ mst - Matt S. Trout (cpan:MSTROUT)

=head1 CONTRIBUTORS

+bfwg - Colin Newell (cpan:NEWELLC)
+
phaylon - Robert Sedlacek (cpan:PHAYLON)

triddle - Tyler Riddle (cpan:TRIDDLE)