Example: the Lazy bridge

The rationale of threaded bridges

The 'bridge' concept was introduced to cope with the ability of the Apache HTTP web server to adopt different multiprocessing models by loading one of the available MPMs (Multi Processing Modules). A bridge's task is, first of all, to let mod_rivet fit the selected multiprocessing model. Separating the mod_rivet core functions from the MPM machinery also provided a flexible and extensible design that enables a programmer to develop alternative approaches to workload and resource management.

The Apache HTTP web server requires its modules to run with any MPM, irrespective of its internal architecture, and it is a general design constraint that no assumptions should be made about the MPM. This clashes with some requirements of threaded builds of Tcl. First of all Tcl is itself threaded (unless threads are disabled at compile time) and many of the basic Tcl data structures (namely Tcl_Obj) cannot be safely shared among threads. This demands that Tcl interpreters be run on separate threads communicating with the HTTP web server through suitable methods.
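
The fragment below is not mod_rivet code: it is only a minimal illustration of the constraint just described, with each thread creating and driving its own Tcl interpreter and never sharing Tcl values with other threads.

#include <tcl.h>
#include <apr_thread_proc.h>

/* Illustration only: a thread private Tcl interpreter. Nothing created
 * here is shared with other threads, which is the pattern a bridge has
 * to preserve */

static void* APR_THREAD_FUNC tcl_thread (apr_thread_t* thd, void* data)
{
    Tcl_Interp* interp = Tcl_CreateInterp();    /* owned by this thread only */

    Tcl_Eval(interp,"puts {hello from a thread private interpreter}");

    Tcl_DeleteInterp(interp);
    apr_thread_exit(thd,APR_SUCCESS);
    return NULL;
}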

Lazy bridge data structures

The lazy bridge was initially developed to outline the basic tasks carried out by each function that makes up a Rivet MPM bridge. The lazy bridge attempts to be minimalist, but it is nearly fully functional; only a few configuration directives (SeparateVirtualInterps and SeparateChannels) are ignored because they are fundamentally incompatible with its design. The bridge is experimental but perfectly fit for many applications; for example it is well suited to development machines where server restarts are frequent.

This is the lazy bridge jump table; as such it defines the functions implemented by the bridge.

RIVET_MPM_BRIDGE {
    NULL,
    Lazy_MPM_ChildInit,
    Lazy_MPM_Request,
    Lazy_MPM_Finalize,
    Lazy_MPM_ExitHandler,
    Lazy_MPM_Interp
};
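
The jump table is an instance of the rivet_bridge_table structure. The sketch below is inferred from the functions referenced in this section: the name and signature of the first field (the server initialization hook, left NULL by the lazy bridge) and the exact signatures of mpm_finalize and mpm_exit_handler are assumptions, while the remaining entries match the lazy bridge functions listed in the rest of this section.

/* a sketch of the bridge jump table type: the mpm_server_init,
 * mpm_finalize and mpm_exit_handler signatures are assumptions */

typedef struct _mpm_bridge_table {
    void                 (*mpm_server_init)   (apr_pool_t* pool,server_rec* server);
    void                 (*mpm_child_init)    (apr_pool_t* pool,server_rec* server);
    int                  (*mpm_request)       (request_rec* r,rivet_req_ctype ctype);
    apr_status_t         (*mpm_finalize)      (void* data);
    int                  (*mpm_exit_handler)  (int code);
    rivet_thread_interp* (*mpm_thread_interp) (rivet_thread_private* private,
                                               rivet_server_conf* conf);
} rivet_bridge_table;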

After the server initialization stage, child processes read the configuration and modules build their own configuration representation. MPM bridges hook into this stage to store and/or build data structures relevant to their design. A fundamental piece of information built during this stage is the database of virtual hosts. The lazy bridge keeps an array of virtual host descriptor pointers, each of them referencing an instance of the following structure.

/* virtual host descriptor */

typedef struct vhost_iface {
    int                 idle_threads_cnt;   /* idle threads for the virtual hosts       */
    int                 threads_count;      /* total number of running and idle threads */
    apr_thread_mutex_t* mutex;              /* mutex protecting 'array'                 */
    apr_array_header_t* array;              /* LIFO array of lazy_tcl_worker pointers   */
} vhost;

A pointer to this array of descriptors is stored in the bridge status, a basic structure that likely every bridge has to create.

/* Lazy bridge internal status data */

typedef struct mpm_bridge_status {
    apr_thread_mutex_t* mutex;
    int                 exit_command;
    int                 exit_command_status;
    int                 server_shutdown;    /* the child process is shutting down  */
    vhost*              vhosts;             /* array of vhost descriptors          */
} mpm_bridge_status;

By design the bridge must create exactly one instance of mpm_bridge_status and store its pointer in module_globals->mpm. This is usually done at the very beginning of the child init function pointed to by the mpm_child_init field in the rivet_bridge_table structure. For the lazy bridge this field in the jump table points to the Lazy_MPM_ChildInit function.

/*
 * -- Lazy_MPM_ChildInit
 * 
 * child process initialization. This function prepares the process
 * data structures for virtual hosts and threads management
 *
 */

void Lazy_MPM_ChildInit (apr_pool_t* pool, server_rec* server)
{
    apr_status_t    rv;
    server_rec*     s;
    server_rec*     root_server = module_globals->server;

    module_globals->mpm = apr_pcalloc(pool,sizeof(mpm_bridge_status));

    /* This mutex is only used to consistently carry out these 
     * two tasks
     *
     *  - set the exit status of a child process (hopefully this will
     *    become unnecessary when Tcl is again able to call
     *    Tcl_DeleteInterp safely)
     *  - control the server_shutdown flag. Actually this is
     *    not entirely needed because once set this flag 
     *    is never reset to 0
     *
     */

    rv = apr_thread_mutex_create(&module_globals->mpm->mutex,
                                  APR_THREAD_MUTEX_UNNESTED,pool);
    ap_assert(rv == APR_SUCCESS);

    /* the mpm->vhosts array is created with as many entries as the number of
     * configured virtual hosts */

    module_globals->mpm->vhosts = 
        (vhost *) apr_pcalloc(pool,module_globals->vhosts_count*sizeof(vhost));
    ap_assert(module_globals->mpm->vhosts != NULL);

    /*
     * Each virtual host descriptor has its own mutex controlling
     * the queue of available threads
     */
     
    for (s = root_server; s != NULL; s = s->next)
    {
        int                 vh;
        apr_array_header_t* array;
        rivet_server_conf*  rsc = RIVET_SERVER_CONF(s->module_config);

        vh = rsc->idx;
        rv = apr_thread_mutex_create(&module_globals->mpm->vhosts[vh].mutex,
                                      APR_THREAD_MUTEX_UNNESTED,pool);
        ap_assert(rv == APR_SUCCESS);
        array = apr_array_make(pool,0,sizeof(void*));
        ap_assert(array != NULL);
        module_globals->mpm->vhosts[vh].array = array;
        module_globals->mpm->vhosts[vh].idle_threads_cnt = 0;
        module_globals->mpm->vhosts[vh].threads_count = 0;
    }
    module_globals->mpm->server_shutdown = 0;
}

Handling Tcl's exit core command

Most of the fields in the mpm_bridge_status structure are meant to deal with the child exit process. Rivet supersedes the Tcl core's exit command with a ::rivet::exit command, and it does so in order to curb the effects of the core command, which would force a child process to exit immediately. This could have unwanted side effects, like skipping the execution of important code dedicated to releasing locks or removing files. For threaded MPMs the abrupt child process termination could be even more disruptive, as all the threads would be deleted without warning.

The ::rivet::exit implementation calls the function pointed to by mpm_exit_handler, which is bridge specific. Its main duty is to take the proper action in order to release resources and force the bridge controlled threads to exit.
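
The way ::rivet::exit hands control over to the bridge is not shown here; the following is only a plausible sketch of that dispatch, written as a standard Tcl object command. The command name, the boilerplate and the integer argument passed to mpm_exit_handler are assumptions; module_globals->bridge_jump_table is the jump table pointer used elsewhere in this section.

/* A sketch (not the actual Rivet source) of an exit command
 * implementation delegating the shutdown policy to the loaded bridge */

static int Example_ExitCmd (ClientData clientData,Tcl_Interp* interp,
                            int objc,Tcl_Obj* const objv[])
{
    int exit_code = 0;

    if ((objc > 1) && (Tcl_GetIntFromObj(interp,objv[1],&exit_code) != TCL_OK)) {
        return TCL_ERROR;
    }

    /* the bridge specific handler releases resources and forces the
     * bridge controlled threads to exit */

    module_globals->bridge_jump_table->mpm_exit_handler(exit_code);

    return TCL_OK;
}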

Note
Nonetheless the exit command should be avoided in ordinary mod_rivet programming. We cannot stress this point enough. If your application must bail out for some reason, focus your attention on the design to find the most appropriate route to exit and, whenever possible, avoid calling exit at all (which basically wraps a C call to Tcl_Exit). In any case the Rivet implementation partially transforms exit into a sort of special ::rivet::abort_page implementation whose eventual action is to call the Tcl_Exit library function. See the exit command for further explanations.

Both the worker bridge and lazy bridge implementations of mpm_exit_handler call the function pointed to by mpm_finalize, which is also the function called by the framework when the web server shuts down. See these functions' code for further details; they are very easy to read and understand.

HTTP request processing with the lazy bridge

Request processing with the lazy bridge starts by determining which virtual host a request was created for. The rivet_server_conf structure keeps a numerical index for each virtual host. This index is used to reference the virtual host descriptor; from it the request handler tries to gain a lock on the mutex protecting the array of lazy_tcl_worker structure pointers. Each instance of this structure is a descriptor of a thread created for a specific virtual host; threads available for processing have their descriptor on that array, and the handler callback pops the first lazy_tcl_worker pointer in order to signal the thread there is work to do for it. This is the lazy_tcl_worker structure:

/* lazy bridge Tcl thread status and communication variables */

typedef struct lazy_tcl_worker {
    apr_thread_mutex_t* mutex;              /* protects the worker state below          */
    apr_thread_cond_t*  condition;          /* synchronizes worker and request handler  */
    int                 status;             /* init, processing, done, idle, thread_exit */
    apr_thread_t*       thread_id;
    server_rec*         server;             /* virtual host server record               */
    request_rec*        r;                  /* request being processed                  */
    int                 ctype;
    int                 ap_sts;             /* status returned to the request handler   */
    int                 nreqs;
    rivet_server_conf*  conf;               /* rivet_server_conf* record                */
} lazy_tcl_worker;

The server field is assigned the virtual host server record, whereas the conf field keeps the pointer to a run time computed rivet_server_conf. This structure may change from request to request because the request configuration changes when the URL refers to directory specific configuration in <Directory ...>...</Directory> blocks.
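
For instance, a per-directory setting like the hypothetical fragment below (paths and script are made up for illustration) yields a different effective configuration for requests mapped to that directory:

<VirtualHost *:80>
    ServerName www.example.com

    <Directory "/var/www/app">
        RivetDirConf BeforeScript "source /var/www/app/init.tcl"
    </Directory>
</VirtualHost>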

The lazy bridge does not start any Tcl worker thread at server startup; it waits for requests to come in and then, if worker threads are sitting on a virtual host queue, a thread's lazy_tcl_worker structure pointer is popped and the request handed to it. If no available thread is on the queue a new worker thread is created. The code in Lazy_MPM_Request is easy to understand and shows how this works.

/* -- Lazy_MPM_Request
 *
 * The lazy bridge HTTP request function. This function 
 * stores the request_rec pointer into the lazy_tcl_worker
 * structure which is used to communicate with a worker thread.
 * Then the array of idle threads is checked and if empty
 * a new thread is created by calling create_worker
 */

int Lazy_MPM_Request (request_rec* r,rivet_req_ctype ctype)
{
    lazy_tcl_worker*    w;
    int                 ap_sts;
    rivet_server_conf*  conf = RIVET_SERVER_CONF(r->server->module_config);
    apr_array_header_t* array;
    apr_thread_mutex_t* mutex;

    mutex = module_globals->mpm->vhosts[conf->idx].mutex;
    array = module_globals->mpm->vhosts[conf->idx].array;
    apr_thread_mutex_lock(mutex);

   /* This request may have come while the child process was 
    * shutting down. We cannot run the risk that incoming requests 
    * may hang the child process by keeping its threads busy, 
    * so we simply return an HTTP_INTERNAL_SERVER_ERROR. 
    * This is hideous and explains why the 'exit' command must
    * be avoided at all costs when programming with mod_rivet
    */

    if (module_globals->mpm->server_shutdown == 1) {
        ap_log_rerror(APLOG_MARK, APLOG_ERR, APR_EGENERAL, r,
                      MODNAME ": http request aborted during child process shutdown");
        apr_thread_mutex_unlock(mutex);
        return HTTP_INTERNAL_SERVER_ERROR;
    }

    /* If the array is empty we create a new worker thread */

    if (apr_is_empty_array(array))
    {
        w = create_worker(module_globals->pool,r->server);
        (module_globals->mpm->vhosts[conf->idx].threads_count)++; 
    }
    else
    {
        w = *(lazy_tcl_worker**) apr_array_pop(array);
    }

    apr_thread_mutex_unlock(mutex);
    
    apr_thread_mutex_lock(w->mutex);
    w->r        = r;
    w->ctype    = ctype;
    w->status   = init;
    w->conf     = conf;
    apr_thread_cond_signal(w->condition);

    /* we wait for the Tcl worker thread to finish its job */

    while (w->status != done) {
        apr_thread_cond_wait(w->condition,w->mutex);
    } 
    ap_sts = w->ap_sts;

    w->status = idle;
    w->r      = NULL;
    apr_thread_cond_signal(w->condition);
    apr_thread_mutex_unlock(w->mutex);

    return ap_sts;
}
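
Lazy_MPM_Request relies on create_worker, whose code is not reproduced in this section. The fragment below is only a plausible sketch of it, assuming it allocates a lazy_tcl_worker from the pool it receives, creates the mutex and condition variable used above and starts a thread running request_processor; the thread attributes and the initial worker status are assumptions.

static void* APR_THREAD_FUNC request_processor (apr_thread_t* thd,void* data);

/* create_worker (sketch, not the actual implementation): allocate a
 * worker descriptor for a virtual host and spawn its thread */

static lazy_tcl_worker* create_worker (apr_pool_t* pool,server_rec* server)
{
    apr_status_t     rv;
    lazy_tcl_worker* w;

    /* apr_pcalloc zeroes the structure, so the request pointer starts out NULL */

    w = (lazy_tcl_worker*) apr_pcalloc(pool,sizeof(lazy_tcl_worker));
    w->server = server;

    rv = apr_thread_mutex_create(&w->mutex,APR_THREAD_MUTEX_UNNESTED,pool);
    ap_assert(rv == APR_SUCCESS);

    rv = apr_thread_cond_create(&w->condition,pool);
    ap_assert(rv == APR_SUCCESS);

    /* the new thread starts in request_processor receiving 'w' as its argument */

    rv = apr_thread_create(&w->thread_id,NULL,request_processor,(void*)w,pool);
    ap_assert(rv == APR_SUCCESS);

    return w;
}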

After a request is processed the worker thread returns its own lazy_tcl_worker descriptor to the array and then waits on the condition variable used to control and synchronize the bridge threads with the Apache worker threads. The worker thread code is in the request_processor function.

/*
 * -- request_processor
 *
 * The lazy bridge worker thread. This thread prepares its control data and 
 * will serve requests addressed to a given virtual host. Virtual host server
 * data are stored in the lazy_tcl_worker structure stored in the generic 
 * pointer argument 'data'
 * 
 */

static void* APR_THREAD_FUNC request_processor (apr_thread_t *thd, void *data)
{
    lazy_tcl_worker*        w = (lazy_tcl_worker*) data; 
    rivet_thread_private*   private;
    int                     idx;
    rivet_server_conf*      rsc;

    /* The server configuration */

    rsc = RIVET_SERVER_CONF(w->server->module_config);

    /* Rivet_ExecutionThreadInit creates and returns the thread private data. */

    private = Rivet_ExecutionThreadInit();

    /* A bridge creates and stores in private->ext its own thread private
     * data. The lazy bridge is no exception. We just need a flag controlling 
     * the execution and an interpreter control structure */

    private->ext = apr_pcalloc(private->pool,sizeof(mpm_bridge_specific));
    private->ext->keep_going = 1;
    private->ext->interp = Rivet_NewVHostInterp(private->pool,w->server);
    private->ext->interp->channel = private->channel;

    /* The worker thread can respond to a single request at a time therefore 
     * must handle and register its own Rivet channel */

    Tcl_RegisterChannel(private->ext->interp->interp,*private->channel);

    /* From the rivet_server_conf structure we determine what scripts we
     * are using to serve requests */

    private->ext->interp->scripts = 
            Rivet_RunningScripts (private->pool,private->ext->interp->scripts,rsc);

    /* This is the standard Tcl interpreter initialization */

    Rivet_PerInterpInit(private->ext->interp,private,w->server,private->pool);
    
    /* The child initialization is fired. Beware of the terminology
     * trap: the term 'child', meaning 'child process', was inherited
     * from prefork-only modules. In this case the child init actually
     * is a worker thread initialization, because in a threaded module
     * this is the agent playing the same role a child process plays
     * with the prefork bridge */

    Lazy_RunConfScript(private,w,child_init);

    /* The thread is now set up to serve requests within the
     * do...while loop controlled by private->ext->keep_going */

    idx = w->conf->idx;
    apr_thread_mutex_lock(w->mutex);
    do 
    {
        module_globals->mpm->vhosts[idx].idle_threads_cnt++;
        while ((w->status != init) && (w->status != thread_exit)) {
            apr_thread_cond_wait(w->condition,w->mutex);
        } 
        if (w->status == thread_exit) {
            private->ext->keep_going = 0;
            continue;
        }

        w->status = processing;
        module_globals->mpm->vhosts[idx].idle_threads_cnt--;

        /* Content generation */

        private->req_cnt++;
        private->ctype = w->ctype;

        w->ap_sts = Rivet_SendContent(private,w->r);

        if (module_globals->mpm->server_shutdown) continue;

        w->status = done;
        apr_thread_cond_signal(w->condition);
        while (w->status == done) {
            apr_thread_cond_wait(w->condition,w->mutex);
        } 
 
        /* rescheduling itself in the array of idle threads */
       
        apr_thread_mutex_lock(module_globals->mpm->vhosts[idx].mutex);
        *(lazy_tcl_worker **) apr_array_push(module_globals->mpm->vhosts[idx].array) = w;
        apr_thread_mutex_unlock(module_globals->mpm->vhosts[idx].mutex);

    } while (private->ext->keep_going);
    apr_thread_mutex_unlock(w->mutex);
    
    ap_log_error(APLOG_MARK,APLOG_DEBUG,APR_SUCCESS,w->server,"processor thread orderly exit");
    Lazy_RunConfScript(private,w,child_exit);

    apr_thread_mutex_lock(module_globals->mpm->vhosts[idx].mutex);
    (module_globals->mpm->vhosts[idx].threads_count)--;
    apr_thread_mutex_unlock(module_globals->mpm->vhosts[idx].mutex);

    apr_thread_exit(thd,APR_SUCCESS);
    return NULL;
}

The lazy bridge implementation of module_globals->bridge_jump_table->mpm_thread_interp, which is supposed to return the pointer to the rivet_thread_interp structure relevant to a given request, has a straightforward task to do, since by design each thread has one interpreter.

rivet_thread_interp* Lazy_MPM_Interp(rivet_thread_private *private,
                                     rivet_server_conf* conf)
{
    return private->ext->interp;
}
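
A hypothetical call site in the framework would dispatch through the jump table rather than calling the bridge function directly; the variable names below are illustrative.

/* fetching the thread interpreter through the jump table (illustration) */

rivet_thread_interp* interp =
        (*module_globals->bridge_jump_table->mpm_thread_interp)(private,conf);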

As already pointed out, running this bridge you get separate virtual interpreters and separate channels by default, and since by design each thread gets its own Tcl interpreter and Rivet channel you will not be able to revert this behavior in the configuration with

SeparateVirtualInterps Off
SeparateChannels       Off

which are simply ignored.