code snippets

Block caching with pagers

Drupal 6 and future core versions ship with block caching. There are several constants you can set to determine how a block is cached (not at all, per page, per role, and so on), which can hugely improve the performance of your site for both anonymous and authenticated users. The cache does need to be rebuilt after a total wipe of the page and block caches. Per-page caching is an interesting option, but in some use cases it leads to a lot of cache entries you don't actually need.
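As a quick sketch of those settings, here is a minimal hook_block() implementation exposing two blocks with different cache granularities. 'mymodule' and the block descriptions are placeholders; the constants are the ones defined in Drupal 6's block.module, redefined here only so the snippet is self-contained:

```php
<?php
// Cache flags as defined in Drupal 6 block.module. A real module would
// never redefine these; they are repeated here to keep the sketch
// self-contained.
define('BLOCK_NO_CACHE', -1);
define('BLOCK_CACHE_PER_ROLE', 0x0001);
define('BLOCK_CACHE_PER_USER', 0x0002);
define('BLOCK_CACHE_PER_PAGE', 0x0004);
define('BLOCK_CACHE_GLOBAL', 0x0008);

/**
 * Implementation of hook_block() ('mymodule' is a placeholder name).
 */
function mymodule_block($op = 'list', $delta = 0, $edit = array()) {
  if ($op == 'list') {
    return array(
      0 => array(
        'info' => 'One cached copy for the whole site',
        'cache' => BLOCK_CACHE_GLOBAL,
      ),
      1 => array(
        'info' => 'One cached copy per role and per page',
        // The flags are bit values, so granularities can be combined.
        'cache' => BLOCK_CACHE_PER_ROLE | BLOCK_CACHE_PER_PAGE,
      ),
    );
  }
}
?>
```

Note that BLOCK_NO_CACHE is -1 rather than a flag: it opts the block out of the cache entirely, which is exactly what we want below so we can do our own caching.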

An example: with faceted search (either with Apache Solr or with the Faceted Search module), you can put a block in a sidebar that lists facets for browsing through results. Per-page caching makes sense there because the block contents change depending on the facet you have clicked. But what about pagers? The block shows the same facet list on every pager page, yet it gets a separate cache entry for each page because request_uri() is different every time. We can avoid that by putting our own logic in a custom block.
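To make the problem concrete, here is a standalone sketch in plain PHP, with no Drupal APIs involved (the helper name and example URLs are invented), showing how keeping only the path collapses every pager page onto a single cache id:

```php
<?php
// Hypothetical helper: derive a cache id from a request URI by keeping
// only the path, so the pager's query string is ignored.
function facet_block_cache_id($request_uri) {
  $parts = parse_url($request_uri);
  return $parts['path'];
}

// Two pager pages of the same faceted search result set.
$page_1 = '/search/apachesolr_search/term?page=1';
$page_2 = '/search/apachesolr_search/term?page=2';

// Keyed on the full request_uri(), per-page block caching stores a
// separate entry for each pager page; keyed on the path alone, both
// pages share one cache entry.
print facet_block_cache_id($page_1) . "\n"; // /search/apachesolr_search/term
print facet_block_cache_id($page_2) . "\n"; // /search/apachesolr_search/term
?>
```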

This implementation exposes a new block. We deliberately set the cache to BLOCK_NO_CACHE because we will handle the caching ourselves in the function that returns the block's content.

<?php
/**
 * Implementation of hook_block().
 */
function test_block($op = 'list', $delta = 0, $edit = array()) {
  if ($op == 'list') {
    return array(
      0 => array(
        'info' => 'Block cache per page with pager',
        'cache' => BLOCK_NO_CACHE,
      ),
    );
  }
  elseif ($op == 'view' && $delta == 0) {
    return test_block_view();
  }
}
?>

The function that returns the content uses request_uri() and parse_url() to get the requested path, and we use that path as our own cache id. We simply ignore any query string in the URI because we know the block has the same content for every page of results, regardless of the query. This is a simple example; you can of course add more logic depending on your use case. We store our entries in the cache_block table because that table is flushed often enough that we don't end up with stale data on our pages.

<?php
/**
 * Block callback.
 */
function test_block_view() {
  $block = array();

  // Get the URI without any query parameters.
  $uri = parse_url(request_uri());
  $cache_id = $uri['path'];

  // Do we have something in the cache?
  if ($cache = cache_get($cache_id, 'cache_block')) {
    $block = $cache->data;
  }
  // Otherwise rebuild and cache it.
  else {
    $views_data = module_invoke('views', 'block', 'view', 'frontpage-block_1');
    $block['subject'] = 'Frontpage';
    $block['content'] = $views_data['content'];
    cache_set($cache_id, $block, 'cache_block');
  }

  return $block;
}
?>

Hide the users icon in administration menu

I'm a huge fan of Administration menu, even more than projects like Admin, Toolbar and possibly others out there. If you want to remove some items from the menu, you need to implement hook_menu_link_alter() and reset the $item. The following snippet removes the users icon, which shows the number of anonymous and authenticated users. Simple and powerful.

<?php
/**
 * Implementation of hook_menu_link_alter().
 */
function swentel_menu_link_alter(&$item, $menu) {
  if ($item['title'] == 'icon_users') {
    $item = NULL;
  }
}
?>

Random results with Apache Solr and Drupal

The schema.xml that comes with the Drupal Apache Solr module doesn't define the random_* field, unlike the default schema.xml included in the Apache Solr package. We needed that functionality for a project where we wanted to display three blocks showing random results based on a couple of fields available in the node, in our case the author, the title and a CCK field. With 300k nodes, random results gave a nicer experience than seeing the same results come back over and over. Adding a random sort order is pretty easy in a few simple steps: http://lucene.apache.org/solr/api/org/apache/solr/schema/RandomSortField...

Implementing the tags from that manual didn't work right away, but after some fiddling the following changes to the XML seem to do the trick. Feel free to add comments and suggestions.

<!-- goes in types -->
<fieldType name="rand" class="solr.RandomSortField" indexed="true" />

<!-- goes in fields -->
<dynamicField name="random*" type="rand" indexed="true" stored="true"/>

After indexing your nodes, try running the following query on your Solr admin page:

http://localhost:port/solr/select?q=whatever&morekeyshere&sort=random_127789 desc

Our blocks are defined via hook_block() and use apachesolr_search_execute() to send our query to the Solr engine. With hook_apachesolr_modify_query() you can add a sort parameter and you'll get your random results.

<?php
function hook_apachesolr_modify_query(&$query, &$params, $caller) {
  if ($caller == 'whatever') {
    $seed = rand(1, 200);
    $params['qt'] = 'standard';
    $params['sort'] = 'random_'. $seed .' asc';
  }
}
?>

Apache Solr Spielerei

If you haven't heard of Apache Solr and its integration with Drupal, then you're probably still struggling with the default search shipped with Drupal core. Pity you. Now, this won't be an introduction to the excellent search engine; no, this is a tale about the combination of Apache Solr, node caching and Node displays. Take a look at the following snippet:

<?php
/**
 * Creme de la creme:
 * Put the full node object in the index, so no node_loads are needed for results.
 */
function nd_search_apachesolr_update_index(&$document, $node) {
  $node->body = $node->content['body']['#value'];
  unset($node->content);
  $document->tm_node = serialize($node);
}
?>
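The round trip that makes this work is plain PHP serialization, nothing Drupal-specific. A standalone sketch (the stdClass fields are invented for illustration):

```php
<?php
// Stand-in for a node object; the fields are made up for this sketch.
$node = new stdClass();
$node->nid = 42;
$node->title = 'Hello Solr';
$node->body = '<p>Full rendered body.</p>';

// What gets stored in the tm_node field of the Solr document...
$stored = serialize($node);

// ...and what comes back at render time: a complete node object,
// with no node_load() and therefore no database round trip.
$restored = unserialize($stored);
print $restored->title . "\n"; // Hello Solr
?>
```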

This code lives in nd_search, a small contrib module you can download from the Node Displays Contributions project. It indexes the complete node object into the Apache Solr engine, which you can use later on either in custom code or on the search results page. Drupal core gives you the freedom to define a custom search function to render the results instead of the default page, which is, IMHO, pretty hard to customize. ND search implements hook_search_page(), and in combination with the power of ND we have full control over how to render a node per content type, without fetching any extra data from the database. The code underneath explains it all.

<?php
/**
 * Get the serialized version of the node and unserialize it.
 *
 * @param $doc The Apache Solr document to be converted.
 *
 * @return Node version of the document.
 */
function _solr_document_to_node($doc) {
  $node_serialized = $doc['node']->getField('tm_node');
  $node = unserialize($node_serialized['value']);
  return $node;
}

/**
 * Implementation of hook_search_page().
 */
function apachesolr_search_search_page($results) {
  $output = '';

  foreach ($results as $key => $result) {
    $node = _solr_document_to_node($result);
    $node->build_mode = NODE_BUILD_SEARCH_RESULT;
    $output .= node_view($node);
  }

  $output .= theme('pager', NULL, 10, 0);

  return $output;
}
?>

Pretty cool, right? The module also indexes all CCK fields for you, which you can use in custom code if you want to fire custom queries on one of those fields. The following snippet comes from a block where we want to search on a CCK field called 'name'. For each result we get back, we use the same function to unserialize the node object, and after that we call node_view(), which is altered through the ND module with a custom build mode. Score again!

<?php
$filter = 'ss_cck_field_name:swentel';
$search_results = apachesolr_search_execute($filter, '', '');
$output = '';
foreach ($search_results as $key => $result) {
  $nid = $result['node']->getField('nid');
  // Don't list the same node we're looking at right now.
  if ($nid['value'] == arg(1)) {
    continue;
  }
  $node = _solr_document_to_node($result);
  $node->build_mode = 'nd_blocks';
  $output .= node_view($node, FALSE, FALSE);
}
return $output;
?>

With this power, imagine a search results page with two or three blocks that doesn't fire a single extra query at the database for extra data. Our ultimate, and probably improbable, dream is to cache all data in Apache Solr so we don't need to touch MySQL anymore. Of course, that's bollocks, but with the project we're currently building (more than 300k nodes to start with) we're pretty sure we'll be able to deliver a nice search experience for our end users.

Note: I'm pretty biased when it comes to the ND project since I'm one of the co-developers, but hey, we're so excited about it and we're planning a lot of new features soon. More news on that later!
