
Planet Drupal

Drupal.org - aggregated feeds in category Planet Drupal

lakshminp.com: The Drupal 8 plugin system - part 1

10 February 2015 - 3:12am

Plugins are swappable pieces of code in Drupal 8. To see how different they are from hooks, let's take an example where we want to create a new field type.

In Drupal 7, this involves:

  1. Providing information about the field

    hook_field_info - describes the field, adds metadata like label, default formatter and widget.

    hook_field_schema - resides in the module's .install file. Specifies how the field data is stored in the database.

    hook_field_validate - validates the field content before it is persisted in the database.

    hook_field_is_empty - defines the criteria that decide when this field is considered "empty".

  2. Describing how the field will be displayed, using:

    hook_field_formatter_info - metadata about different types of formatters.

    hook_field_formatter_view - implementation of the formatters defined above, mostly spits out HTML.

  3. Defining a widget for the field type.

    hook_field_widget_info - provides information about widgets. For instance, a calendar would be a widget for a date field.

    hook_field_widget_form - implements the widgets.
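
The hooks above are easiest to picture in code. Here is a minimal, hypothetical Drupal 7 sketch of the first step (the module name mymodule and the color field type are invented for illustration; real implementations define more keys and more hooks):

```php
<?php

/**
 * Implements hook_field_info().
 */
function mymodule_field_info() {
  return array(
    'mymodule_color' => array(
      'label' => t('Color'),
      'description' => t('Stores a hex color value.'),
      'default_widget' => 'mymodule_color_picker',
      'default_formatter' => 'mymodule_color_swatch',
    ),
  );
}

/**
 * Implements hook_field_schema().
 * (Lives in mymodule.install.)
 */
function mymodule_field_schema($field) {
  return array(
    'columns' => array(
      'value' => array('type' => 'varchar', 'length' => 7, 'not null' => FALSE),
    ),
  );
}

/**
 * Implements hook_field_is_empty().
 */
function mymodule_field_is_empty($item, $field) {
  return empty($item['value']);
}
```

Note how the field's definition is scattered across several loosely related functions, which is exactly what the plugin system addresses.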

In Drupal 8, the three steps mentioned above (field info, formatters and widgets) map to three types of plugins.

  1. Providing information about the field.
    This is done by the FieldType plugin, which extends FieldItemBase. FieldItemBase encapsulates the schema, metadata and validation, which the developer overrides as needed.

  2. Field formatter.
    Implemented by extending FormatterBase.
    The viewElements() method needs to be overridden to provide a renderable array for a field value. This is similar to hook_field_formatter_view in Drupal 7.

  3. Widgets are implemented by subclassing WidgetBase.
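
As a rough sketch, the FieldType plugin for a hypothetical color field might look like this (the module name mymodule, the plugin id and the property names are invented, and a real field type needs more methods than shown here):

```php
<?php

namespace Drupal\mymodule\Plugin\Field\FieldType;

use Drupal\Core\Field\FieldItemBase;
use Drupal\Core\Field\FieldStorageDefinitionInterface;
use Drupal\Core\TypedData\DataDefinition;

/**
 * Plugin implementation of a hypothetical 'mymodule_color' field type.
 *
 * @FieldType(
 *   id = "mymodule_color",
 *   label = @Translation("Color"),
 *   default_widget = "mymodule_color_picker",
 *   default_formatter = "mymodule_color_swatch"
 * )
 */
class ColorItem extends FieldItemBase {

  // Replaces hook_field_schema(): how the field is stored.
  public static function schema(FieldStorageDefinitionInterface $field_definition) {
    return [
      'columns' => [
        'value' => ['type' => 'varchar', 'length' => 7],
      ],
    ];
  }

  // Describes the data properties this field item carries.
  public static function propertyDefinitions(FieldStorageDefinitionInterface $field_definition) {
    $properties['value'] = DataDefinition::create('string')
      ->setLabel(t('Hex color value'));
    return $properties;
  }

  // Replaces hook_field_is_empty().
  public function isEmpty() {
    $value = $this->get('value')->getValue();
    return $value === NULL || $value === '';
  }

}
```

Everything that was spread across separate hooks now lives in one class, with the metadata in the annotation.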

There is much more to creating field types than that, but that's all you need to know from a plugins perspective.

What did we gain by changing fields from a hook-based system to a plugin-based one?
For one, the code became more encapsulated: the definition and the implementation are wrapped up in one place.
Another benefit we reaped by following OO principles is inheritance. Plugins are designed to be extensible. This allows us to subclass similar functionality and achieve one of the most coveted ideals of software engineering: code reuse.

In the next part, we will look at the concept of plugin discovery and some common plugin types exposed by Drupal 8 core.

Categories: Drupal

Jonathan Brown: Bitcoin transaction forwarding

10 February 2015 - 12:54am

Forwarding of individual Bitcoin transactions to one or more addresses is a new feature in Coin Tools.

Because it spends the outputs from the transaction being forwarded, instead of making a regular payment from the wallet's pool of unspent outputs, it is not necessary to wait for confirmations before sending the new transaction. If the original transaction did not ultimately make it onto the blockchain, then the forwarding transaction would not make it onto the blockchain either.

However, it is prudent to wait for 1 confirmation due to a problem called transaction malleability. When a transaction is broadcast to the Bitcoin network it has a txid to uniquely identify it. However, a mischievous actor could take the transaction and change something that wouldn't affect the digital signature but would change the txid. If this altered transaction made it onto the blockchain instead of the original version, any subsequent transactions that were referring to the original txid would be invalid.

If this were to occur it wouldn't cause money loss, and it could be overcome by detecting it and then reissuing the subsequent transactions. A cleaner solution is to wait for one confirmation before spending the outputs. Once the transaction is on the blockchain, it is extremely unlikely that the block would be replaced by one containing an altered version of the transaction.

Transaction malleability is due to a bug in the design of Bitcoin. While it cannot now be solved completely, changes to the Bitcoin software have been made to lessen the impact of this problem.

The \Drupal\cointools_daemon\Client::forwardTransaction() method in Coin Tools allows for transactions to be forwarded to multiple addresses. Output addresses have quantities attached to them to define what ratio of the input amount is sent to each address. This is similar to how Coinsplit operates, although they wait for 3 confirmations and may do a general wallet spend instead of forwarding the specific transaction.

<?php
  /**
   * Forwards a bitcoin transaction to one or more addresses with defined
   * proportionality.
   *
   * @param $txid
   *   txid of transaction to forward.
   * @param array $addresses
   *   List of addresses that can be spent from.
   * @param array $outputs
   *   Where to forward the bitcoin to.
   *   Keys are destinations addresses.
   *   Values are proportional quantities.
   * @param int $tx_confirm_target
   *   The fee should be calculated for the transaction to reach the blockchain
   *   after this many blocks. Default: 1.
   *
   * @return string
   *   txid of the forwarding transaction.
   */
  public function forwardTransaction($txid, array $addresses, array $outputs, $tx_confirm_target = 1) {
    $transaction_in = $this->transactionLoad($txid);
    // Find inputs and amount for new transaction.
    $inputs = [];
    $transaction_amount = 0;
    foreach ($transaction_in['vout'] as $vout) {
      // Is this output for one of our addresses?
      if (in_array($vout['scriptPubKey']['addresses'][0], $addresses)) {
        // Has this output been spent yet?
        try {
          $this->request('gettxout', [$txid, $vout['n']]);
          $inputs[] = [
            'txid' => $txid,
            'vout' => $vout['n'],
          ];
          $transaction_amount += CoinTools::bitcoinToSatoshi($vout['value']);
        }
        catch (\Exception $e) {}
      }
    }
    // Remove the miner fee from the transaction amount.
    $transaction_amount -= $this->transactionEstimateFee(count($inputs), count($outputs));
    // Divide up the pie according to the correct proportions.
    $ratio = $transaction_amount / array_sum($outputs);
    foreach ($outputs as $address => &$amount) {
      $amount *= $ratio;
      // Eliminate dust outputs.
      if ($amount < 546) {
        unset($outputs[$address]);
        continue;
      }
      $amount = CoinTools::satoshiToBitcoin($amount);
    }
    // Make sure there is something to send.
    if (empty($outputs)) {
      throw new \Exception("No bitcoin to send.");
    }
    // Send the transaction.
    return $this->transactionSendNew($inputs, $outputs);
  }
?>
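
The proportional split inside forwardTransaction() is easiest to see with concrete numbers. Here is the same arithmetic as a standalone sketch (the addresses and amounts are invented for illustration):

```php
<?php
// Standalone sketch of the proportional-split arithmetic above.
// Suppose 1,000,000 satoshi remain after deducting the miner fee,
// to be split 70/30 between two band members.
$transaction_amount = 1000000;
$outputs = [
  'address_member_a' => 70, // proportional quantities, not absolute amounts
  'address_member_b' => 30,
];
$ratio = $transaction_amount / array_sum($outputs); // 1000000 / 100 = 10000
foreach ($outputs as $address => &$amount) {
  $amount *= $ratio; // 700000 and 300000 satoshi respectively
}
unset($amount);
print_r($outputs);
```

Because the values are ratios rather than amounts, the same configuration works whatever the size of the incoming transaction.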

What are the use cases of transaction forwarding?

Splitting donations among different parties

If a band were getting paid in Bitcoin, the percentage each band member receives could be configured.

Affiliate marketing

When a sale is made, a third party who provided the lead for the sale could receive part of the funds spent.

Donate percent of revenue to charity

A company could provably show that they are donating a certain percentage of their revenue to a charity. If the funds are always forwarded to the same address for the charity, then a customer can observe the blockchain and check that the correct proportion of their money went to the right place.

Getting funds off an insecure platform

As I mentioned in an earlier blog post, holding funds in a hot wallet on a server is not very secure. Immediately forwarding the transactions out of harm's way alleviates this problem.

I have added support for this to Coin Tools payments. If the forwarding address is set in the payment type, payments will be forwarded to it after 1 confirmation.

An example of this can be seen on the blockchain.

Categories: Drupal

DrupalCon News: DrupalCon LA 2015 - get involved with the Coding and Development track

9 February 2015 - 11:14am

Implements hook_awesome() has been deprecated, use $Drupal->awesome().

Categories: Drupal

Tag1 Consulting: When All Else Fails, Reflect on the Fail

9 February 2015 - 9:49am

While coding the MongoDB integration for Drupal 8 I hit a wall, first with the InstallerKernel, which was easy to remedy with a simple core patch, but then a similar problem occurred with the TestRunnerKernel, and that one is not so simple to fix: these things were not made with extensibility in mind. You might hit some other walls -- the code below is not MongoDB-specific. But note how unusual this is: you won't hit similar problems often. Drupal 8 is very extensible, but it has its limits.


Categories: Drupal

Dcycle: Drupal and Docker: Creating a new Docker image based on an existing image

9 February 2015 - 7:26am

To get the most of this blog post, please read and understand Getting Started with Docker (Servers for Hackers, 2014/03/20). Also, all the steps outlined here have been done on a Vagrant CoreOS virtual machine (VM).

I recently needed a really simple non-production Drupal Docker image on which I could run tests. d7alt/drupal (which you can find by typing docker search drupal, or on GitHub) worked for my needs, except that it did not have the cURL PHP library installed, so drush en simpletest -y was throwing an error.

Therefore, I decided to create a new Docker image which is based on d7alt/drupal, but with the php5-curl library installed.

I started by creating a new local directory (on my CoreOS VM), which I called docker-drupal:

mkdir docker-drupal

In that directory, I created a Dockerfile which takes d7alt/drupal as its base and runs apt-get install curl.

FROM b7alt/drupal
RUN apt-get update
RUN apt-get -y install curl

(You can find this code at my GitHub account at alberto56/docker-drupal.)

When you run this you will get:

docker build .
...
Successfully built 55a8c8999520

That hash is a Docker image ID, and your hash might be different. You can run it and see if it works as expected:

docker run -d 55a8c8999520
c9a98bdcab4e027e8571bde71ee92b4380247a44ef9314749ef5680864de2928

In the above, we are telling Docker to create a container based on the image we just created (55a8c8999520). The resulting container hash is displayed (yours might be different). We are using -d so that our containers runs in the background. You can see that the container is actually running by typing:

docker ps
CONTAINER ID    IMAGE           COMMAND...
c9a98bdcab4e    55a8c8999520    "/usr/bin/supervisor...

This tells you that there is a running container (c9a98bdcab4e) based on the image 55a8c8999520. Again, your hashes will be different. Let's log into that container now:

docker exec -it c9a98bdcab4e bash
root@c9a98bdcab4e:/#

To make sure that cUrl is successfully installed, I will figure out where Drupal resides on this container, and then try to enable Simpletest. If that works, I will consider my image a success, and exit from my container:

root@c9a98bdcab4e:/# find / -name 'index.php'
/srv/drupal/www/index.php
root@c9a98bdcab4e:/# cd /srv/drupal/www
root@c9a98bdcab4e:/srv/drupal/www# drush en simpletest -y
The following extensions will be enabled: simpletest
Do you really want to continue? (y/n): y
simpletest was enabled successfully.                         [ok]
root@c9a98bdcab4e:/srv/drupal/www# exit
exit

Now I know that my 55a8c8999520 image is good for now and for my purposes; I can create an account on Docker.com and push it to my account for later use:

docker build -t alberto56/docker-drupal .
docker push alberto56/docker-drupal

Anyone can now run this Docker image by simply typing:

docker run alberto56/docker-drupal

One thing I had a hard time getting my head around was having both a GitHub project and a Docker project, which are different but linked. The GitHub project is the recipe for creating an image, whereas the Docker project is the image itself.

Once we start thinking of our environments like this (as entities which should be versioned and shared), the risk of differences between environments is greatly reduced. I was used to running simpletests for my projects on an environment which was managed by hand; when I got a strange permissions error on the test environment, I decided to start using Docker and version control to manage the container where tests are run.

Tags: blogplanet
Categories: Drupal

Gábor Hojtsy: Second Drupal core committer passes the Acquia Certified Developer exam

9 February 2015 - 6:41am

I finally stopped putting it off and took the opportunity to test myself on the Acquia Certified Developer exam. To be honest I put it off for quite a while. As a household name in the community, I had fears it would prove I am not good enough, and funnily enough, I did worst on back end development (ooops!) and 10% better on site building. My overall result is actually the same as Angie Byron's at 85%. I'm flawless with fundamental web concepts at least. Ha!

As a computer science major who transferred into more of a mix of development, leadership, events and content production, I don't have much experience with tech certification exams. My only encounter was with the CIW certifications 13 or so years ago, which I took back in the day to be able to teach the CIW courses at a local private school. Judging from that experience and common wisdom, I expected textbook-style questions where I would need to know the order and names of the arguments and options of l(), as well as recite the row styles of Views and the available options of date fields. The reality could not be farther from that.

Categories: Drupal

Drupalize.Me: Guide to Drupal 8 at DrupalCon Bogotá

9 February 2015 - 6:00am

Are you lucky enough to attend DrupalCon Latin America in Bogotá, Colombia? Excited to learn more about Drupal 8 in particular? If so, put these Drupal 8-related sessions on your radar. If you'll be watching from home, keep an eye on the Drupal Association's YouTube channel for session recordings. Here's the Drupalize.Me guide to Drupal 8 at DrupalCon Latin America 2015.

Categories: Drupal

Code Enigma: Testing Frameworks - an Exploration

9 February 2015 - 3:03am

In this blog post, I walk through an automated testing setup, using Behat, Mink and Selenium.

9th February 2015, by jamie

Many months ago, a discussion between my colleague Chris Maiden and me sparked off an idea that we developed into an automated testing framework, capable of taking any assertions from user stories and running them either as unit tests against code, or as functional tests against a staged or development web interface.

There are two main reasons for developing this. The first and simplest is that our clients want it, and we like to help our clients. The second is that it makes sense to have a set of standardised ‘Drupal’ web tests which we can execute in part or full against builds, to ensure that we’re not breaking critical behaviour. A library of standard tests would be beneficial on any current or future projects.

Enter Behat

Behat is a testing framework for PHP, based on work done for Ruby (which resulted in a project called Cucumber). Since there's a natural link to long, green vegetables, the syntax shared by both Behat and Cucumber is called Gherkin - no, I'm not sure where this came from.

Essentially, Behat exists to make unit tests easy. The Gherkin syntax is fairly close to natural language (as scripting syntax goes). An example:

 

# features/search.feature
Feature: Search
  In order to see a word definition
  As a website user
  I need to be able to search for a word

  Scenario: Searching for a page that does exist
    Given I am on "/wiki/Main_Page"
    When I fill in "search" with "Behavior Driven Development"
    And I press "searchButton"
    Then I should see "agile software development"

 

You can probably follow this. The awesome thing about Behat is that it takes this syntax (this is actually valid) and runs tests. The keywords are the likes of Given, When, And and Then; the quoted strings are variables. Within the scenario block, the bits between keywords and variables actually map to PHP functions in the background, which are stored within context classes.

Alone, Behat would be used to run unit tests, with contexts providing accessor functions for PHP units, which provide assertions to test. As useful as this may be, it's arguably not always the best way to test a website, which is where Mink comes into play. Mink is effectively an abstraction layer, which lets Behat talk to different functional testing frameworks. It provides a handful of web oriented contexts, but most importantly, it executes Gherkin syntax test cases in browser emulators, or live browser sessions, using any framework you'd care to name! In our case, we've used Mink's default settings to plug into Goutte (a 'headless browser' implementation) and Selenium (a framework which allows tests to be run directly through the browser).

So this is all great. We can use Goutte for all test instances where we do not require an actual browser. However, some of our functional tests are going to require Javascript to run, and in those cases (at least) we'll need Selenium.

Watching Selenium in action is a little bizarre. When you run tests, it literally opens up a browser, and starts clicking on things, typing into search boxes, going to URLs you've specified. But having seen it run, you begin to wonder how it would ever work on a server, running automated tests post-build. In most cases, our servers are running a Linux distribution which doesn't even have a desktop GUI installed, let alone a browser, but there's a trick to getting this to work; a little package called Xvfb. It stands for X Virtual Frame Buffer, and gives you a 'virtual' desktop GUI in which you can run standard applications, which would otherwise require a desktop. You can't see it, because it doesn't actually have a front end, but your desktop applications run in it, and return whatever output they've been designed to return.

Setting it up: Behat, Mink & Selenium

The default, and by far the easiest way to install Behat is to use Composer - which is a tool employed by Symfony (and other PHP projects) to resolve dependencies in a graceful and automated way.

In the root of our project, we’ve created a behat/ subdirectory, and in it created a file called composer.json, which contains the following JSON structure:

{
  "require": {
    "behat/mink": "*",
    "behat/mink-goutte-driver": "*",
    "behat/mink-selenium2-driver": "*"
  }
}

Having saved this, we run the following commands from the terminal, in our behat/ subdirectory:

$ curl http://getcomposer.org/installer | php
$ php composer.phar install

This will download a file called composer.phar - a compressed PHP application which will download and install all the dependencies, plus the Behat, Mink, Goutte and Selenium packages required for this exercise. Once present, they'll live in the behat/vendor/ subdirectory.

There’s a little more work to do before we can test this out. First of all we’ll need to initialise a behat test project. From our behat/ subdirectory we run:

$ bin/behat --init

This will create a project structure for us, which will consist of a behat.yml file, and a behat/features/ subdirectory, which is where our tests will live. By default, behat.yml is not set up to autoload all of Mink’s extra classes within feature tests, so this will need to be edited so that it looks like this:

# behat.yml
default:
  extensions:
    Behat\MinkExtension\Extension:
      goutte: ~
      selenium2: ~

This tells Behat that any tests not using a user-defined context will use the MinkContext class by default, which means they'll know how to perform all the defined common web-related actions. We can test this is the case by running

$ bin/behat -dl

from within the behat/ subdirectory. This should return a list of the step definitions which Behat/Mink knows about.

Assuming this all works, next we’re going to need some tests! We’ll save the following in a file at behat/features/search.feature:
 

# features/search.feature
Feature: Search
  In order to see a word definition
  As a website user
  I need to be able to search for a word

  Scenario: Searching for a page that does exist
    Given I am on "/wiki/Main_Page"
    When I fill in "search" with "Behavior Driven Development"
    And I press "searchButton"
    Then I should see "agile software development"

  Scenario: Searching for a page that does NOT exist
    Given I am on "/wiki/Main_Page"
    When I fill in "search" with "Glory Driven Development"
    And I press "searchButton"
    Then I should see "Search results"

 

The astute among us will notice that the feature defined above is checking a wiki’s search functionality. Wikipedia, to be exact. We need to tell Behat which site it is we’re running tests on, which happens back in the behat.yml file:

# behat.yml
default:
  extensions:
    Behat\MinkExtension\Extension:
      base_url: http://en.wikipedia.org
      goutte: ~
      selenium2: ~

Once that’s done, we’re ready to run our Behat tests!

As it stands, we can execute these tests by running:

$ bin/behat

in our behat/ subdirectory. By default, Behat will use the Goutte driver to run tests, so you won't see anything spectacular happen - just some test results in your terminal window. Next, we need Selenium.
 

Testing Selenium Locally

First off, to test using Selenium, we’re going to need a Selenium server running. It can be downloaded as a .jar file from http://seleniumhq.org/download/ and running it is a simple matter of getting into whichever directory it’s saved to, and running:

$ java -jar selenium-server-*.jar &

Obviously, we need a JRE installed, and we could replace the '*' with the specific file name or version number we downloaded.

Once we have Selenium running in the background (hence the &), we need to modify our feature file like so:

@javascript
Scenario: Searching for a page that does exist
  Given I am on "/wiki/Main_Page"
  When I fill in "search" with "Behavior Driven Development"
  And I press "searchButton"
  Then I should see "agile software development"

 

The ‘@javascript’ tag above the Scenario tells Mink to use a driver which supports, predictably, Javascript (and will therefore load the test in an actual browser, rather than an emulation of a browser).

If we now run

$ bin/behat

from our behat/ subdirectory, we should see Firefox open up, navigate to the Wikipedia site, and start running our tests. When it’s finished, it will close again, and our test results will be displayed in the terminal.

Xvfb, & Testing Selenium on a headless server

So, having got all of the above working, we need to take it a step further. Suppose we want our test framework to run on a server machine - one that's hosted at Linode, or Rackspace, and on which we don't want to install Gnome, or KDE, or even Windows. We could just use the Goutte driver to execute tests, but that doesn't allow us to test any Javascript-rich functionality. We need a way of emulating the desktop environment, and running a browser.

This is where a package called Xvfb comes in. We can find it in most package repositories - my favourite flavour of Linux is usually Debian or Ubuntu, so we’d install it with:

$ sudo apt-get install xvfb

And it can be run with:

$ Xvfb :99 -ac &

This means to run Xvfb on a display we’ve numbered 99, with no user access control, and send it to the background.

We can now start up Selenium, and run our tests on this server (assuming we’ve deployed our tests on this server), but we’ll need to tell Selenium which display to open browser instances on:

$ export DISPLAY=:99
$ java -jar selenium-server-*.jar &

and now we’re good to go. As before, we go to our behat/ subdirectory, and run:

$ bin/behat

We’ll see the same output as we did when running tests in Selenium locally, and we should also get output from Selenium and Xvfb, since they’re running as background tasks in the current shell.

Oh no, my server needs a different setup to my local machine/vm!

When we first committed our Behat framework to a project, pushed it to a stage server, and tried to execute our tests, we found that there were dependency problems. This is largely because when Behat is installed, it uses the Composer framework to check which libraries are needed, downloads them automatically, and includes them from a vendor/ subdirectory which is created as part of the install task.

Since the vendor/ subdir is the only place in which these dependencies are resolved, our solution was to install Behat/Mink locally to the server via composer, somewhere safely away from the project. Then when code was pushed up to our stage server, it’s a simple task to remove the vendor/ subdir from the project, and symlink in one which we’re confident contains all the dependencies for the current server.

Another way we could tackle this problem could be to install Behat/Mink/Selenium/Xvfb on our server as before, and have Behat’s features/ subdir version controlled, and managed by a CI setup, such as Jenkins. That way, our tests would be separate from our project, and could be run as needed without worrying about dependencies at all.
 

Running Selenium/Xvfb as a service

There are a few ways of having Selenium and Xvfb running in the background on our server, without having to run them explicitly from the terminal each time we need them. One way we explored was to have them installed as ‘service’ scripts, from /etc/init.d/ but the simplest and easiest way we opted for was simply to place the following in /etc/rc.local, which executes commands when the machine starts:

/usr/bin/Xvfb -ac :99 &
export DISPLAY=:99
java -jar /usr/local/bin/selenium-server-standalone-2.25.0.jar &
exit 0

This is enough to have the necessary programs running when we need them, which is especially useful if we’re integrating our test framework with a Continuous Integration framework, such as Jenkins. Which is exactly what we intend to do ;-)

Jenkins, Fabric.py and Continuous Integration

The idea behind this is to have tests run whenever a changeset is pushed to a git branch. As we get out into the realms of CI setups, there are fewer and fewer absolutes, because each CI setup is different depending on who created it, and where and why it’s been set up.

In our case, we’ve added a step to our fabric file, which looks like:

# Run behat tests, if present
def run_behat_tests(repo, branch, build):
    if os.path.isdir(cwd + '/behat') and env.host == 'staging_host_name.codeenigma.com':
        print "===> Re-linking vendor directory"
        run("cd /var/www/%s_%s_%s/behat && rm -rf vendor && ln -s /var/www/shared/behat/vendor" % (repo, branch, build))
        print "===> Running behat tests..."
        run("export DISPLAY=:99 && cd /var/www/%s_%s_%s/behat && bin/behat" % (repo, branch, build))

 

All this does is check whether a behat/ subdirectory exists and whether the hostname we're deploying to matches our expectations for the environment on which we'd like to run tests; if these criteria are met, we remove the behat/vendor/ subdirectory, symlink it to one we know is good for this server, and then run the test suite (having exported the display variable).

Ultimately, a continuous integration setup is going to vary depending on the individual circumstances involved, but building a 'test' step in is pretty simple. Behat exits with a zero status if all tests pass, and a non-zero status plus an error message if not, so it's very simple to check for.

It’s also possible to run Behat with command line options to output the results to HTML, so this can be combined with a CI framework to just email the results of any tests, and pass the build regardless - again, it’s down to the individual needs of the project.

(Main banner image: dummies by Greg Westfall)

 

Categories: Drupal

lakshminp.com: Annotations in Drupal 8

9 February 2015 - 12:12am

Annotations are PHP comments which hold metadata about your function or class. They do not directly affect program semantics as they are comment blocks. They are read and parsed at runtime by an annotation engine.

Annotations are already used in other PHP projects for various purposes. Symfony2 uses annotations for specifying routing rules. Doctrine uses them for adding ORM-related metadata. Though handy in various situations, their utility is much debated:

  1. How do you actually differentiate between annotations and normal user comments?

  2. Why put business logic inside comment blocks? Shouldn't it be a part of core language semantics?

  3. Annotations blur the boundary between code and comments. If the developer misses an annotation (remember, it's not a program semantic), the code might compile fine but not work as expected.

  4. Another closely related gripe is that annotations are hard to test and debug.

Most of these objections are around the concept of annotations implemented as comments in PHP. There is, however, a proposal to add them as a first-class language feature. In the meantime, we are stuck with comment-based annotations.

Annotations are not all evil. They make it easier to inject behaviour without adding a lot of boilerplate.

Here is an example taken from Stack Overflow which shows how annotations can cut a lot of boilerplate code.

Let's say we want to inject a Weapon object into a Soldier instance.

class Weapon {
  public function shoot() {
    print "... shooting ...";
  }
}

class Soldier {
  private $weapon;

  public function setWeapon($weapon) {
    $this->weapon = $weapon;
  }

  public function fight() {
    $this->weapon->shoot();
  }
}

If the DI is done by hand, then:

$weapon = new Weapon();
$soldier = new Soldier();
$soldier->setWeapon($weapon);
$soldier->fight();

We could go a step further and decouple the DI and put it in an external file, like:

Soldier.php

$soldier = Container::getInstance('Soldier');
$soldier->fight(); // ! weapon is already injected

soldierconfig.xml

<class name="Soldier">
  <!-- call setWeapon, inject new Weapon instance -->
  <call method="setWeapon">
    <argument name="Weapon" />
  </call>
</class>

If we use annotations instead:

class Soldier {
  ...

  /**
   * @inject $weapon Weapon
   */
  public function setWeapon($weapon) {
    $this->weapon = $weapon;
  }
}

Annotations also give the additional advantage of having both code and metadata co-located. I think this is another reason why it was decided to use annotations and do away with external configuration files in Drupal 8 (at least for plugins).

The Drupal part

Drupal 8 borrows its annotation syntax from Doctrine. Drupal 7 had metadata tucked away in info hooks, which involves reading the whole module file into memory for every request. Annotations, on the other hand, are tokenized and parsed without incurring the same memory overhead as the hook-based approach. Also, docblocks are cached by the opcode cache.

Syntax

Drupal annotations are nested key-value pairs, very similar to JSON dumps. There are some gotchas though. You MUST use double quotes for strings and no quotes at all for numbers. Lists are represented by curly brackets and don't have a trailing comma.
Here's a code dump of annotations in action, from the TextDefaultFormatter.php file in the text module.

/**
 * Plugin implementation of the 'text_default' formatter.
 *
 * @FieldFormatter(
 *   id = "text_default",
 *   label = @Translation("Default"),
 *   field_types = {
 *     "text",
 *     "text_long",
 *     "text_with_summary",
 *   },
 *   quickedit = {
 *     "editor" = "plain_text"
 *   }
 * )
 */
class TextDefaultFormatter extends FormatterBase {
  ...

With so many conveniences, I hope annotations become a first class citizen of PHP pretty soon!

Categories: Drupal

DrupalOnWindows: Getting #2,000 requests per second without varnish

7 February 2015 - 11:00pm

Now that Varnish is finally "the free version of a proprietary software", it's time to look somewhere else, and the answer is right in front of our noses. When the word proprietary starts to appear, let's stick to the big guys. If you want to know how to get #2,000 requests per second without depending on anything else but your current webserver (yes, the same one you are using to serve your pages, no need to set up anything else here) then keep on reading.

More articles...
Categories: Drupal

Daniel Pocock: Lumicall's 3rd Birthday

6 February 2015 - 1:33pm

Today, 6 February, is the third birthday of the Lumicall app for secure SIP on Android.

Happy birthday

Lumicall's 1.0 tag was created in the Git repository on this day in 2012. It was released to the Google Play store, known as the Android Market back then, while I was in Brussels, the day after FOSDEM.

Since then, Lumicall has also become available through the F-Droid free software marketplace for Android and this is the recommended way to download it.

An international effort

Most of the work on Lumicall itself has taken place in Switzerland. Many of the building blocks come from Switzerland's neighbours:

  • The ice4j ICE/STUN/TURN implementation comes from the amazing Jitsi softphone, which is developed in France.
  • The ZORG open source ZRTP stack comes from PrivateWave in Italy.
  • Lumicall itself is based on the Sipdroid project that has a German influence, while Sipdroid is based on MjSIP which comes out of Italy.
  • The ENUM dialing logic uses code from ENUMdroid, published by Nominet in the UK. The UK is not exactly a neighbour of Switzerland but there is a tremendous connection between the two countries.
  • Google's libPhoneNumber has been developed by the Google team in Zurich and helps Lumicall format phone numbers for dialing through international VoIP gateways and ENUM.

Lumicall also uses the reSIProcate project for server-side infrastructure. The repro SIP proxy and TURN server run on secure and reliable Debian servers in a leading Swiss data center.

An interesting three years for free communications

Free communications is not just about avoiding excessive charges for phone calls. Free communications is about freedom.

In the three years Lumicall has been promoting freedom, the issue of communications privacy has grabbed more headlines than I could have ever imagined.

On 5 June 2013 I published a blog about the Gold Standard in Free Communications Technology. Just hours later a leading British newspaper, The Guardian, published damning revelations about the US Government spying on its own citizens. Within a week, Edward Snowden was a household name.

Google's Eric Schmidt had previously told us that "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place.". This statement is easily debunked: as CEO of a corporation listed on a public stock exchange, Schmidt and his senior executives are under an obligation to protect commercially sensitive information that could be used for crimes such as insider trading.

There is no guarantee that Lumicall will keep the most determined NSA agent out of your phone but nonetheless using a free and open source application for communications does help to avoid the de facto leakage of your conversations to a plethora of marketing and profiling companies that occurs when using a regular phone service or messaging app.

How you can help free communications technology evolve

As I mentioned in my previous blog on Lumicall, the best way you can help Lumicall is by helping the F-Droid team. F-Droid provides a wonderful platform for distributing free software for Android and my own life really wouldn't be the same without it. It is a privilege for Lumicall to be featured in the F-Droid eco-system.

That said, if you try Lumicall and it doesn't work for you, please feel free to send details from the Android logs through the Lumicall issue tracker on Github and they will be looked at. It is impossible for Lumicall developers to test every possible phone but where errors are obvious in the logs some attempt can be made to fix them.

Beyond regular SIP

Another thing that has emerged in the three years since Lumicall was launched is WebRTC, browser based real-time communications and VoIP.

In its present form, WebRTC provides tremendous opportunities on the desktop but it does not displace the need for dedicated VoIP apps on mobile handsets. WebRTC applications using JavaScript are a demanding solution that don't integrate as seamlessly with the Android UI as a native app and they currently tend to be more intensive users of the battery.

Lumicall users can receive calls from desktop users with a WebRTC browser using the free calling from browser to mobile feature on the Lumicall web site. This service is powered by JSCommunicator and DruCall for Drupal.

Categories: Drupal

Dries Buytaert: Growing Drupal in Latin America

6 February 2015 - 12:45pm

When I visited Brazil in 2011, I was so impressed by the Latin American Drupal community and how active and passionate the people are. The region is fun and beautiful, with some of the most amazing sites I have seen anywhere in the world. It also happens to be a strategic region for the project.

Latin American community members are doing their part to grow the project and the Drupal community. In 2014, the region hosted 19 Global Training Day events to recruit newcomers, and community leaders coordinated many Drupal camps to help convert those new Drupal users into skilled talent. Members of the Latin American community help promote Drupal at local technology and Open Source events, visiting events like FISL (7,000+ participants), Consegi (5,000+ participants) and Latinoware (4,500+ participants).

You can see the results of all the hard work in the growth of the Latin American Drupal business ecosystem. The region has a huge number of talented developers working at agencies large and small. When they aren't creating great Drupal websites like the one for the Rio 2016 Olympics, they are contributing code back to the project. For example, during our recent Global Sprint Weekend, communities in Bolivia, Colombia, Costa Rica, and Nicaragua participated and made valuable contributions.

The community has also been instrumental in translation efforts. On localize.drupal.org, the top translation is Spanish with 500 contributors, and a significant portion of those contributors come from the Latin America region. Community members are also investing time and energy translating Drupal educational videos, conducting camps in Spanish, and even publishing a Drupal magazine in Spanish. All of these efforts lower the barrier to entry for Spanish speakers, which is incredibly important because Spanish is one of the top spoken languages in the world. While the official language of the Drupal project is English, there can be a language divide for newcomers who primarily speak other languages.

Last but not least, I am excited that we are bringing DrupalCon to Latin America next week. This is the fruit of many hours spent by passionate volunteers in the Latin American local communities, working together with the Drupal Association to figure out how to make a DrupalCon happen in this part of the world. At every DrupalCon we have had so far, we have seen an increase in energy for the project and a bump in engagement. Come for the software, stay for the community! Hasta pronto!

Categories: Drupal

Aten Design Group: Removing Duplicate Content Across Multiple Drupal Views

6 February 2015 - 11:31am

Views is an indispensable and powerful module at the heart of Drupal that you can use to quickly generate structured tables or lists of consistently formatted content, and filter and group that content by simple or complex logic. But in pushing Views to do ever more complex and useful things, we can sort of paint ourselves into a corner sometimes. For instance, I have many times created multiple Views displays on a single page that contain overlapping content. My homepage has a Views display of manually curated content, using Nodequeue or a similar module. On the same homepage, I have a Views display of news content that shows the most recent content. Since the two different Views displays pull from the same bucket of content, it is very possible to have duplicate content across the displays. Here is an example:

Notice the underlined duplicate titles across the two Views displays.

This is what we want:

Notice the missing featured titles from the deduped Views display.

By creating a custom Drupal module and utilizing a Views hook, we can remove the duplicate content across the two Views displays. We programmatically check exactly which pieces of content are in one View, and we feed that information to a filter in the second View that excludes it.

Before diving into my example, I want to cover a few assumptions I’m making about you.
  • You are using Drupal 7
  • You are familiar with Views module
  • You know how to install modules
  • You know at least a touch of PHP
Steps to Follow Along

View Example Code on Github

Step 1

My example code assumes that you have created two Views displays.

  • Featured - A View display of manually curated content. This display will be used to generate a list of content to exclude from our automated Views display.
  • Automated - A View display of news content that shows the most recent content. This display will accept a list of content to be excluded.

You can of course adapt the Views displays to your exact needs.

After creating the Views you wish to use, you’ll need to know the machine name of the View and View display.

One way to retrieve these names is from the view edit URL. While editing your view, notice the URL:

/admin/structure/views/view/automated_news/edit/block

In my case, automated_news is the view name and block is the view display name.

Make a note of your machine names for Step 3

Step 2

On the view you wish to dedup or exclude content from, you’ll need to add and configure a contextual filter.

  1. Navigate to edit the automated content view
  2. Under “Advanced” & “Contextual Filters”, click add and select “Content: Nid (The node ID.)”
  3. Select “Provide default value” and choose “Fixed value”.
  4. Leave the Fixed value empty as we’ll provide this in code
  5. Under “More” select “Allow multiple values” and “Exclude”
  6. Save the view
Step 3

Enable your custom module that contains the deduping code. You are welcome to download the example module on Github and use it, or add the code to an existing custom module if it makes more sense. In any case, you’ll need to customize the module a little bit to work with your Views.

  1. Update the machine name variables from Step 1. See $featured_view_name, $featured_view_display, $automated_view_name and $automated_view_display
  2. Save your module
  3. Enable your module
  4. Clear your Drupal cache

If everything was configured correctly, you should see your Views displays properly deduped.

Code Explained

View Example Code on Github

The code relies on hook_views_pre_view(), a Views hook. Using this hook, we can pass values to the Views display contextual filter set in Step 2. Here is a version where content IDs (NIDs) 1, 2, 5 & 6 are manually being passed to a view for exclusion.

/**
 * Implements hook_views_pre_view().
 *
 * https://api.drupal.org/api/views/views.api.php/function/hook_views_pre_view/7
 */
function mymodule_views_pre_view(&$view, &$display_id, &$args) {
  // Check for the specific View name and display.
  if ($view->name == 'automated_news' && $display_id == 'block') {
    $args[] = '1+2+5+6';
  }
}

There are many ways you could dynamically build a list of NIDs you wish to exclude. In my example, we are loading another Views display to build a list of NIDs. The function views_get_view() loads a Views display in code and provides access to the result set.

// Load the featured view.
// https://api.drupal.org/api/views/views.module/function/views_get_view/7
$view = views_get_view($featured_view_name);
$view->set_display($featured_view_display);
$view->pre_execute();
$view->execute();

// Get the results.
$results = $view->result;
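Putting the pieces together, here is a minimal sketch of the whole dedup hook. The machine names 'featured'/'block' for the curated view and the module name mymodule are placeholders; substitute your own names from Step 1.

```php
/**
 * Implements hook_views_pre_view().
 */
function mymodule_views_pre_view(&$view, &$display_id, &$args) {
  if ($view->name == 'automated_news' && $display_id == 'block') {
    // Load the featured (curated) view and collect its node IDs.
    $featured = views_get_view('featured');
    $featured->set_display('block');
    $featured->pre_execute();
    $featured->execute();

    $nids = array();
    foreach ($featured->result as $row) {
      $nids[] = $row->nid;
    }

    // Pass the NIDs to the contextual filter as a single "1+2+5" style
    // argument; because "Exclude" was checked in Step 2, these nodes
    // are removed from the automated display.
    if (!empty($nids)) {
      $args[] = implode('+', $nids);
    }
  }
}
```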

Drupal Views is a powerful module and I like the ability to extend it even further using the extensive Views hooks API. In the case of my example, we can keep using Views without writing complex database queries.

Categories: Drupal

Annertech: 5 Tips for a Responsive Website

6 February 2015 - 10:36am
5 Tips for a Responsive Website

Last month I wrote about why we care about responsive websites, and why you should too. This month I'm going to brush the surface of how one might achieve such a goal.

Responsive Buzzword Bingo

I'm not about to go knee-deep into the semantics of the various jargon words surrounding this topic and their pros and cons, but here are broad descriptions of some of the approaches.

Categories: Drupal

Dcycle: Two tips for debugging Simpletest tests

6 February 2015 - 7:52am

I have been using Simpletest on Drupal 7 for several years, and, used well, it can greatly enhance the quality of your code. I like to practice test-driven development: writing a failing test first, then run it multiple times, each time tweaking the code, until the test passes.

Simpletest works by spawning a completely new Drupal site (ignoring your current database), running tests, and destroying the database. Sometimes, a test will fail and you're not quite sure why. Here are two tips to help you debug why your tests are failing:

Tip #1: debug()

The Drupal debug() function can be placed anywhere in your test or your source code, and the result will appear on the test results page in the GUI.

For example, if when you are playing around with the dev version of your site, things work fine, but in the test, a specific node contains invalid data, you can add this line anywhere in your test or source code which is being called during your test:

... debug($node); ...

This will provide formatted output of your $node variable, alongside your test results.

Tip #2: die()

Sometimes the temporary test environment's behaviour seems to make no sense. And it can be frustrating to not be able to simply log into it and play around with it, because it is destroyed after the test is over.

To understand this technique, here is quick primer on how Simpletest works:

  • In Drupal 7, running a test requires a host site and database. This is basically an installed Drupal site with Simpletest enabled, and your module somewhere in the modules directory (the module you are testing does not have to be enabled).
  • When you run a test, Simpletest creates a brand-new installation of Drupal using a special prefix simpletest123456 where 123456 is a random number. This allows Simpletest to have an isolated environment where to run tests, but on the same database and with the same credentials as the host.
  • When your test does something, like call a function, or load a page with, for example, $this->drupalGet('user'), the host environment is ignored and the temporary environment (which uses the prefixed database tables) is used. In the previous example, the test loads the "user" page using a real HTTP call. Simpletest knows to use the temporary environment because the call is made using a specially-crafted user agent.
  • When the test is over, all tables with the prefix simpletest123456 are destroyed.

If you have ever tried to run a test on a host environment which already contains a prefix, you will understand why you can get "table name too long" errors in certain cases: Simpletest is trying to add a prefix to another prefix. That's one reason to avoid prefixes when you can, but I digress.

Now you can try this: somewhere in your test code, add die(), this will kill Simpletest, leaving the temporary database intact.

Here is an example: a colleague recently was testing a feature which exported a view. In the dev environment, the view was available to users with the role manager, as was expected. However when the test logged in as a manager user and attempted to access the view, the result was an "Access denied" page.

Because we couldn't easily figure it out, I suggested adding die() to play around in the environment:

...
$this->drupalLogin($manager);
$this->drupalGet('inventory');
die();
$this->assertNoText('denied', 'A manager accessing the inventory page does not see "access denied"');
...

Now, when the test was run, we could:

  • wait for it to crash,
  • then examine our database to figure out which prefix the test was using,
  • change the database prefix in sites/default/settings.php from '' to (for example) 'simpletest73845',
  • run drush uli to get a one-time login.
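For the prefix step, the change in sites/default/settings.php looks something like this (the value 'simpletest73845' is just the example number from above; use whatever prefix your database examination revealed):

```php
// sites/default/settings.php (Drupal 7): point the site at the
// orphaned test tables by reusing their prefix.
$databases['default']['default']['prefix'] = 'simpletest73845';
```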

Now, it was easier to debug the source of the problem by visiting the views configuration for inventory: it turns out that features exports views with access by role using the role ID, not the role name (the role ID can be different for each environment). Simply changing the access method for the view from "by role" to "by permission" made the test pass, and prevented a potential security flaw in the code.

(Another reason to avoid "by role" access in views is that User 1 often does not have the role required, and it is often disconcerting to be user 1 and have "access denied" to a view.)

So in conclusion, Simpletest is great when it works as expected and when you understand what it does, but when you don't, it is always good to know a few techniques for further investigation.

Tags: blogplanet
Categories: Drupal

OpenLucius: A robot in your Drupal social intranet / extranet – why and how?

6 February 2015 - 2:15am

If you work with a team on projects, then there are (obviously) tasks to share. Including tasks to be followed up by your clients.

For example: the delivery of a design in Photoshop/fireworks for their new social intranet.

Now it can happen that somebody does not follow-up on his/her task in time resulting in problems for your planning. Usually this is not on purpose, often they simply 'forgot'.

Categories: Drupal

Drupal core announcements: Princeton Critical Sprint Recap

5 February 2015 - 6:35pm

At the end of January, 2015, sprinters gathered in Princeton, NJ, USA for a focused D8 Accelerate sprint designed to accelerate work on critical and upgrade-path-blocking issues related to menus, menu links, and link generation.

The sprint was coordinated with the 4th annual DrupalCamp NJ. pwolanin, dawehner, kgoel, xjm, Wim Leers, mpdonadio, YesCT, effulgentsia, and tim.plunkett participated onsite. (In addition to the D8 Accelerate Group, local Drupalists davidhernandez, cilefen, crowdcg, wheatpenny, ijf8090, and HumanSky joined the sprint primarily to work on Drupal 8 Twig and theme issues, and EclipseGC and evolvingweb dropped in too.)

The sprint benefitted from pre-sprint planning meetings and discussion with the sprinters and a broader group of contributors (including webchick and catch, as well as amateescu, larowlan, Gábor Hojtsy, Bojhan, and Crell), and daily support from webchick to track, summarize, and unblock progress with issue posts and commits so the sprinters could move on to the next steps.

Thanks to the pre-sprint planning, sprint focus, and the tremendous experience of the participants and their history of working together on hard issues in the past, this sprint achieved a very high level and breadth of success. Sprinters worked on a total of 17 critical issues (14 of which are now fixed) as well as 27 other related bugs and DX fixes. All the issues opened or worked on during the sprint can be seen under the tag D8 Accelerate NJ.

Take-away lessons

Identifying key issues in advance made the sprint more productive, as did meeting via video chat and in IRC to discuss possible solutions ahead of time. The pending deadline of the sprint helped push contributors to forge consensus and begin work on the issues before the event even happened. Never underestimate the value of a hard deadline!

As always, having the group in the same room (and timezone) with a whiteboard allowed resolution of discussions that would have taken weeks via issue comments and online meetings. We also were able to scale our progress with occasional pair programming and pair code review - very effective for ramping up skilled sprinters to unfamiliar and difficult problem spaces.

In addition, while the sprint was happening at the same time as DrupalCamp NJ activities (and for 2 days in the same building), the sprinters deliberately avoided the presentations or general Drupal mentoring they might have done in other circumstances. This relative lack of distractions was part of what we learned made the prior Ghent sprint a success and it helped maintain the focus at this sprint as well.

The sprinters stayed in 2 adjoining hotels, which made coordination easy.

Changing the sprint room each day initially seemed like it might be a drawback, but instead seemed to keep things a bit fresher. Note, however, that every room had windows and natural light - especially important the first days as people were dealing with jet lag.

It's off-season in New Jersey in January, so flight costs were low, which allowed us to fund many more people to come and also accommodated people who made travel plans as late as a week prior to the event. This allowed us to recruit more participants even with a very short time frame to plan. (When the sprint was first given the D8 Accelerate Grant at the end of December, we had only 3 confirmed attendees and just a rough idea of the issues and goals to be addressed.)

Sponsors

The sprint was sponsored by a Drupal Association grant and by Princeton University Web Development Services providing space and logistical support.

In addition, Black Mesh sponsored all travel costs for YesCT, Forum One provided time off for kgoel, Night Kitchen Interactive provided time off for mpdonadio, and Acquia provided several employees' time (pwolanin, effulgentsia, xjm, tim.plunkett, and Wim Leers).

Daily sprint updates from webchick

These daily issue summaries were originally provided by webchick on [meta] Finalize the menu links system.

January 27

A very hyped snow storm leads to the cancelation of all 3 flights coming from Europe - but the snow fell further North and East, so all 3 participants were able to reschedule for the next day.

January 28

Most participants arrived in Princeton and settled in.

January 29

Day one of the sprint! Occupying the lounge at the NE corner of 701 Carnegie, part of the facilities of Princeton University.

Dinner plans were inspired by the DrupalCamp NJ theme for 2015 - a New Jersey diner! Just reading the menu was an exotic treat for the Europeans.

January 30

Occupying a multi-purpose room at the SE Corner of 701 Carnegie.

At the same time, about 70 people participated in 4 Drupal training courses in other rooms on the ground floor.

Thanks to the prompting of Tim Plunkett, dinner was real New Jersey pizza at Nino's Pizza Star in Princeton (a local favorite among the Central NJ Drupal meetup regulars). EclipseGC even treated the group to a Nutella pizza for dessert!

January 31

Occupying room 111 at the Friend Engineering Center, on the campus of Princeton University. In the neighboring rooms the sessions and BoFs were happening for the 4th annual DrupalCamp NJ. The sprinters were counted among the 257 registered attendees.

February 1

Occupying a (paid) meeting room at the hotel where most sprinters were staying.

Apparently there was some football game going on too.

While most people are headed home tomorrow, there are a few stalwart hangers-on who are staying through to Tuesday.


February 2

People worked together at the hotel or remotely. A Farewell lunch in Princeton was followed by a brief look at the Princeton University campus as a scenic amount of snow fell again.

Categories: Drupal

Mediacurrent: Introducing the Mediacurrent Dropcast!

5 February 2015 - 2:03pm

Our inaugural episode. Team Kool-Aide starts a podcast and we talk about a variety of topics taken from The Weekly Drop.

Episode 0 Audio Download Link

 

Categories: Drupal

more onion - devblog: Stale static cache - you're likely to have seen this bug!

5 February 2015 - 1:18pm

This week I've finally found the core of several issues that I've had in the past. Are you using install-profiles or features? Then this bug is likely to have affected you too.

Tags:
Categories: Drupal

Drupal core announcements: All the sprints at and around Drupal Dev Days Montpellier France

5 February 2015 - 11:57am
Start:  2015-04-13 09:00 - 2015-04-19 09:00 Europe/Zurich Sprint

http://montpellier2015.drupaldays.org

We have a great tradition of extended sprints around big Drupal events including DrupalCons and Drupal Dev Days. Given that a lot of the Drupal core and contrib developers fly in for these events, it makes a lot of sense to use this opportunity to start sooner and/or extend our stay and work together in one space on the harder problems.

Drupal Dev Days Montpellier France is next up! Monday April 13 2015 to Sunday April 19 2015. The host event is looking for sponsors to help make the sprints happen, so you have a comfortable environment with internet, coffee, tea and maybe food. There are already various sprints signed up including Multilingual, Drupal 8 critical burndown, documentation, and Frontend. We are really friendly and need all kinds of expertise!

Now is the time to consider if you can be available and book your travel and hotel accordingly!

Join the sprinters -- sign up now!

Practical details
Dates
April 13 to April 19
Times and locations
Day/Time                       Location
April 13-19, 09:00 to 18:00    TBA, TBA
April 13-19, 18:00 to 24:00    Hotel lobby, TBA, TBA
Looking for sponsors

We are looking for more sponsors to be able to pay for extra expenses on the sprint too. If you are interested sponsoring or if you need sponsors to cover expenses, please contact me at https://drupal.org/user/258568/contact

Frequently asked questions What is a sprint?

Drupal sprints are opportunities to join existing teams and further Drupal: the software, our processes, drupal.org and so on.

Do I need to be a pro developer?

No, not at all. First of all, sprints include groups working on user experience, designs, frontend guidelines, drupal.org software setup, testing improvements, figuring out policies, etc. However, you can be more productive at most sprints if you have a laptop.

How come there are 7 consecutive days of sprints?

We all travel to the same place. We try to use this time to share our knowledge as well as further the platform in all possible ways. Therefore there is almost always an opportunity and a place to participate in moving Drupal forward.

What if I'm new to Drupal and/or sprinting, how can I join?

There will be no formal mentoring, but there will be a place for you. Once you get there, hopefully someone will introduce themselves and help you find your place in the sprint. If not, please reach out and say, "Hi, I'm new to sprinting, but I want to help." And then, someone will find you a group of sprinters to join. Expect your first day to be mostly about finding a group or a couple of issues, reading them, understanding them, and getting set up to work on them. Your second day you will probably make some progress on things. And then your third, fourth, etc. day *you* will be getting things done (and maybe helping people who are there for their first day).

I worked on Drupal before, which sprints are for me?

If you have experience with Drupal issues and maybe already know a team/topic, jump right in, but of course if you have questions, there are always plenty of friendly people to help you.

Why do I have to sign up?

These sprints are broken down to teams working on different topics. It is very important that you sign up for them, so we know what capacity to plan with, so we have enough space (and maybe food/coffee).

Further questions?

Ask at https://drupal.org/user/258568/contact, I am happy to answer.

Categories: Drupal

