Drupal

miggle: learning Drupal in a week - my first job experience

Planet Drupal - 24 January 2018 - 1:38am
By friends of miggle, 24 January 2018

Upon arriving I was welcomed to the office and settled in at a desk. Initially, I was tasked with exploring Drupal and what it could do. Acquia Dev Desktop was the first application I opened, and after experimenting with some of the prebuilt sites I began to gather an understanding of Drupal and why it is used.
Categories: Drupal

INsReady: Single Sign-on using OAuth2 and JWT for Distributed Architecture

Planet Drupal - 23 January 2018 - 9:35pm

Single sign-on (SSO) is a property, where a user logs in with a single ID and password to gain access to a connected system or systems without using different usernames or passwords, or in some configurations seamlessly sign on at each system. A simple version of single sign-on can be achieved over IP networks using cookies but only if the sites share a common DNS parent domain. ---- https://en.wikipedia.org/wiki/Single_sign-on

As the definition suggests, SSO becomes a critical part of system design and user experience design for a complex, distributed system, or for a new application that must integrate with an existing connected system. With SSO enabled, a system owner can manage access control in one centralized place, so granting users permissions across multiple subsystems stays organized. End users, on the other hand, only need to secure one set of credentials to access multiple resources, or to access functionality whose distributed architecture is hidden from them.

As we enter 2018, our software becomes more complex and its services more ubiquitous. Let's use Google's SSO as an example to illustrate the demands on a modern SSO:

  • A user can sign in with a password once for both Gmail.com and YouTube.com
  • A user can go to Feedly.com or the New York Times and use "Sign in with Google" to authorize a third party to access the user's data
  • A user can sign in with a password on a mobile device to sync all photos or contacts from Google
  • A Google Home device can connect to multiple people's Google accounts, and read out their calendar events when needed
  • YouTube.com developers can use Polymer as a frontend technology and authenticate with the YouTube.com backend to load content via a web services API

You might not realize the complexity of a system that supports the modern use cases above until your own system needs one and you have to develop the support. Let's translate the use cases into SSO technical requirements:

  • Support SSO across multiple domains
  • Support Password Grant (sign-in directly on the web), Authorization Code Grant (user authorizes a third party), Client Credentials Grant (machine sign-in), and Implicit Grant (third-party web app sign-in)
  • Support distributed architecture, where your authentication server is not necessarily on the same domain or the same server as your resource servers
  • Allow web services APIs on resource servers to efficiently authenticate requests
  • Avoid technology lock-in for the authentication server, resource servers and client-side apps
  • Support a seamless user authorization experience across different client-side technologies (web, mobile or IoT), and across different first-party and third-party applications

Fortunately, we can leverage existing open standards and open source software to implement SSO for a distributed system. First, we rely on the OAuth 2.0 Authorization Framework and JSON Web Token (JWT) open protocols. OAuth 2.0 covers the common authentication workflows; in fact, the four grant types in the requirements above are terminology borrowed from the OAuth 2.0 protocol. JWT standardizes how a successful authentication result is shared across client apps and resource servers. It allows a resource server to trust a client request without double-checking with the authentication server, which reduces the amount of communication within a distributed system and therefore increases the performance of overall authentication and identification. For more technical details on how to use OAuth 2.0 and JWT for authentication, please see Stateless authentication with OAuth 2 and JWT - JavaZone 2015.
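To make that trust relationship concrete, here is a minimal sketch of how a resource server might validate a bearer token locally. It assumes the firebase/php-jwt library (using its 5.x-era decode() signature), RS256 signing, and a hypothetical public key path; the details of your authentication server will differ:

<?php

require 'vendor/autoload.php';

use Firebase\JWT\JWT;

// Pull the JWT out of the "Authorization: Bearer <token>" header.
$header = $_SERVER['HTTP_AUTHORIZATION'] ?? '';
if (!preg_match('/Bearer\s+(\S+)/', $header, $matches)) {
  http_response_code(401);
  exit;
}

try {
  // Verify the signature against the authentication server's public key.
  // No round trip to the authentication server is needed.
  $public_key = file_get_contents('/etc/keys/oauth-public.key'); // hypothetical path
  $claims = JWT::decode($matches[1], $public_key, ['RS256']);
  // $claims->sub, $claims->exp, etc. now identify the authenticated user.
}
catch (\Exception $e) {
  // Invalid or expired token: reject the request.
  http_response_code(401);
  exit;
}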

As for building the authentication server, where all users and machines sign in, authenticate, authorize, or identify themselves, the critical requirement is that the server implements the OAuth 2.0 protocol and uses JWT as the bearer token. As long as the authentication server implements those protocols, the rest of the facilitating features can be built with any technology. I like using the simple_oauth module with Drupal 8 because, out of the box, this solution is the whole application, including users, consumers and token management. In particular, I have been helping to optimize the user experience of the user authorization process for different use cases. If you are not familiar with Drupal, the Contenta CMS distribution has pre-packaged simple_oauth and its dependencies for you.
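For illustration, here is a sketch of how a first-party client might obtain a token from a simple_oauth-powered authentication server using the Password Grant. The /oauth/token endpoint is the one simple_oauth exposes; the consumer ID, secret and user credentials below are placeholders:

<?php

require 'vendor/autoload.php';

use GuzzleHttp\Client;

$client = new Client(['base_uri' => 'https://auth.example.com']);

// Exchange user credentials for an access token (OAuth 2.0 Password Grant).
$response = $client->post('/oauth/token', [
  'form_params' => [
    'grant_type' => 'password',
    'client_id' => 'CONSUMER-UUID',       // placeholder
    'client_secret' => 'CONSUMER-SECRET', // placeholder
    'username' => 'jane',
    'password' => 'secret',
  ],
]);

$token = json_decode((string) $response->getBody(), TRUE);
// $token['access_token'] is the JWT to send as "Authorization: Bearer <token>".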

Once the authentication server is in place, we implement the protocol and workflows on the resource servers and client-side apps. This part depends largely on the resource server and client-side technologies you have picked. We are building this part of the integration with Node.js, Laravel, Drupal 7 and Drupal 8 applications. At the time of writing, we have published the oauth2_jwt_sso module for Drupal 8.

I will leave the extensibility, limitations, and more technical details of this SSO solution for the upcoming DrupalCon Nashville session. I will include the session video here in late April 2018.

Files: SSO diagram.png
Tags: SSO, OAuth2, JWT, Decoupled, Distributed Architecture, Security, Drupal Planet
Categories: Drupal

PreviousNext: Better image optimisation in Drupal

Planet Drupal - 23 January 2018 - 7:08pm

When optimising a site for performance, one of the options with the best effort-to-reward ratio is image optimisation. Crunching those images in your front-end workflow is easy, but what about author-uploaded images through the CMS?

by Tony Comben / 24 January 2018

Recently, a client of ours was looking for ways to reduce the size of uploaded images on their site without burdening the authors. To solve this, we used the Image Optimize module, which allows you to use a number of compression tools, both local and third-party.

The tools it currently supports include:

  • AdvPng
  • OptiPng
  • PngCrush
  • PngOut
  • PngQuant
  • JfifRemove
  • JpegOptim
  • JpegTran

We decided to avoid the use of third-party services, as processing the images on our own servers could reduce processing time (no waiting for a third party to respond) and ensure reliability.

Picking your server-side compression tool

In order to pick the tools which best served our needs, we chose an image that closely represented the type of image the authors often used: a person's face against a complex background, in both PNG and JPEG formats. We then ran each file through every tool at a moderately aggressive compression level.

PNG Results

Resized image (Drupal 8 default resizing):

Compression Library | Compressed size | Percentage saving
Original | 234kb | -
AdvPng | 234kb | 0%
OptiPng | 200kb | 14.52%
PngCrush | 200kb | 14.52%
PngOut | 194kb | 17.09%
PngQuant | 63kb | 73.07%

Full-size original image:

Compression Library | Compressed size | Percentage saving
Original | 1403kb | -
AdvPng | 1403kb | 0%
OptiPng | 1288kb | 8.19%
PngCrush | 1288kb | 8.19%
PngOut | 1313kb | 6.41%
PngQuant | 445kb | 68.28%

JPEG Results

Resized image (Drupal 8 default resizing):

Compression Library | Compressed size | Percentage saving
Original | 57kb | -
JfifRemove | 57kb | 0%
JpegOptim | 49kb | 14.03%
JpegTran | 57kb | 0%

Full-size original image:

Compression Library | Compressed size | Percentage saving
Original | 778kb | -
JfifRemove | 778kb | 0%
JpegOptim | 83kb | 89.33%
JpegTran | 715kb | 8.09%

Using a combination of PngQuant and JpegOptim, we could save anywhere between 14% and 89% in file size, with larger images bringing greater percentage savings.

Setting up automated image compression in Drupal 8

The Image Optimize module allows us to set up optimisation pipelines and attach them to our image styles. This allows us to set both site-wide and per-image style optimisation.

After installing the Image Optimize module, head to the Image Optimize pipelines configuration (Configuration > Media > Image Optimize pipeline) and add a new optimisation pipeline.

Now add the PngQuant and JpegOptim processors. If they have been installed on the server, Image Optimize should pick up their locations automatically, or you can set the location manually if you are using a standalone binary.

JpegOptim has some additional quality settings. I'm setting "Progressive" to always and "Quality" to a sweet spot of 60; 70 could also be used as a more conservative target.

The final pipeline looks like the following:

Back to the Image Optimize pipelines configuration page, we can now set the new pipeline as the sitewide default:

And boom! Automated sitewide image compression!

Overriding image compression for individual image styles

If the default compression pipeline is too aggressive (or conservative) for a particular image style, we can override it in the Image Styles configuration (Configuration > Media > Image styles). Edit the image style you’d like to override, and select your alternative pipeline:

Applying compression to existing images

Flushing the image cache will recreate existing images with compression the next time each image is loaded. This can be done with the drush command:

drush image-flush --all
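If flushing everything is too blunt, a single style can be flushed programmatically through the image module's entity API. A small sketch; the 'large' style name is just an example:

<?php

use Drupal\image\Entity\ImageStyle;

// Flush only one style's derivatives; they are regenerated (and run
// through the optimisation pipeline) the next time each image is requested.
if ($style = ImageStyle::load('large')) {
  $style->flush();
}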

Conclusion

Setting up automated image optimisation is a relatively simple process, with potentially large impacts on site performance. If you have experience with image optimisation, I would love to hear about it in the comments.

Tagged Image Optimisation
Categories: Drupal

MidCamp - Midwest Drupal Camp: We are pleased to announce Chris Rooney will be our keynote speaker at MidCamp 2018

Planet Drupal - 23 January 2018 - 4:51pm

We are so excited to have Chris as our keynote speaker this year.  He is the President and Founder of Digital Bridge Solutions, a Drupal and Magento Agency here in Chicago that has been a supporter of MidCamp since its inception. 

His presentation at our 2017 event, Whitewashed - Drupal's Diversity Problem And How To Solve It, was a deep and eye-opening look at diversity in Drupal and the greater tech world, and at how we can go about making it better.

Since then, he has partnered with Palantir.net on an ambitious inclusion initiative working with students to introduce them to Drupal. Last year, they brought a group of students from Baltimore to DrupalCon Baltimore. They have held Drupal training sessions here in Chicago and are currently working to bring students from Genesys Works and NPower to DrupalCon Nashville.

Chris' presentation will be a collective group journey into sensitive and vulnerable territories, but promises interactivity, a safe space for the exchange of ideas, and perhaps even a little humor.  We hope you join us for it.

Session Submissions close Friday!

MidCamp is looking for folks just like you to speak to our Drupal audience! Experienced speakers are always welcome, but our camp is also a great place to start for first-time speakers.

MidCamp is soliciting sessions geared toward beginner through advanced Drupal users. Know someone who might be a new voice, but has something to say? Please suggest they submit a session.

Buy a Ticket

Tickets and Individual Sponsorships are available on the site for MidCamp 2018.

Click here to get yours!

Schedule of Events
  • Thursday, March 8th, 2018 - Training and Sprints
  • Friday, March 9th, 2018 - Sessions and Social
  • Saturday, March 10th, 2018 - Sessions and Social
  • Sunday, March 11th, 2018 - Sprints
Sponsor MidCamp 2018!

Are you or your company interested in becoming a sponsor for the 2018 event? Sponsoring MidCamp is a great way to promote your company, organization, or product and to show your support for Drupal and the Midwest Drupal community. It also is a great opportunity to connect with potential customers and recruit talent.

Find out more at:

Volunteer for MidCamp 2018

Want to be part of the MidCamp action? We're always looking for volunteers to help out during the event.  We need registration table help, room monitors, help with setting up the venue, and help clearing out.  Sign up at http://bit.ly/midcamp-volunteer-signup and we'll be in touch shortly!

We hope you'll join us at MidCamp 2018!

Categories: Drupal

Dcycle: Caching a Drupal 8 REST resource

Planet Drupal - 23 January 2018 - 4:00pm

Here are a few things I learned about caching for REST resources.

There are probably better ways to accomplish this, but here is what works for me.

Let’s say we have a REST resource that looks something like this in my_module/src/Plugin/rest/resource/MyRestResource.php, and we have enabled it using the Rest UI module and given anonymous users permission to view it:

<?php

namespace Drupal\my_module\Plugin\rest\resource;

use Drupal\rest\Plugin\ResourceBase;
use Drupal\rest\ResourceResponse;

/**
 * This is just an example.
 *
 * @RestResource(
 *   id = "this_is_just_an_example",
 *   label = @Translation("Display the title of node 1"),
 *   uri_paths = {
 *     "canonical" = "/api/v1/get"
 *   }
 * )
 */
class MyRestResource extends ResourceBase {

  /**
   * {@inheritdoc}
   */
  public function get() {
    $node = node_load(1);
    $response = new ResourceResponse([
      'title' => $node->getTitle(),
      'time' => time(),
    ]);
    return $response;
  }

}

Now, we can visit http://example.localhost/api/v1/get?_format=json and we will see something like:

{"title":"Some Title","time":1516803204}

Reloading the page, ‘time’ stays the same. That means caching is working; we are not recomputing our JSON output each time someone requests it.

How to invalidate the cache when the title changes

If we edit node 1 and change its title to, say, “Another title”, and reload http://example.localhost/api/v1/get?_format=json, we’ll see the old title. To make sure the cache is invalidated when this happens, we need to provide cacheability metadata to our response telling it when it needs to be recomputed.

Our node, when it’s loaded, contains within it all the caching metadata needed to describe when it should be recomputed: when the title changes, when new filters are added to the text format that’s being used, etc. We can add this information to our ResourceResponse like this:

...
$response->addCacheableDependency($node);
return $response;
...

When we clear our cache with drush cr and reload our page, we’ll see something like:

{"title":"Another title","time":1516804411}

We know this is still cached because the time stays the same no matter how often we load the page. Try it, it’s fun!

Even more fun is changing the title of node 1, reloading our JSON page, and seeing the title change without clearing the cache:

{"title":"Yet another title","time":1516804481} How to set custom cache invalidation events

Let’s say you want to trigger a cache rebuild for some reason other than those defined by the node itself (title change, etc.).

A real-world example might be events: an “upcoming events” page should only display events which start later than now. If we invalidate the cache every day, then we’ll never show yesterday’s events in our events feed. Here, we need to add our custom cache invalidation event, in this case “rebuild events feed”.

For the purpose of this demo, we won’t actually build an events feed, but we’ll see how cron might be able to trigger cache invalidation.

Let’s add the following code to our response:

...
use Drupal\Core\Cache\CacheableMetadata;
...
$response->addCacheableDependency($node);
$response->addCacheableDependency(CacheableMetadata::createFromRenderArray([
  '#cache' => [
    'tags' => [
      'rebuild-events-feed',
    ],
  ],
]));
return $response;
...

This uses Drupal’s cache tags concept and tells Drupal that when the cache tag ‘rebuild-events-feed’ is invalidated, all cacheable responses which carry that tag should be invalidated as well. I prefer this to relying on ‘max-age’ because it gives us more fine-grained control over when to invalidate our caches.

On cron, for example, we could invalidate ‘rebuild-events-feed’ only if events have passed since our last invalidation of that tag.
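Sketched out, that cron logic might look like the following. This is only a sketch: my_module and the my_module_events_ended_since() helper are hypothetical, while Cache::invalidateTags() and the state API are real core services:

<?php

use Drupal\Core\Cache\Cache;

/**
 * Implements hook_cron().
 */
function my_module_cron() {
  $last_check = \Drupal::state()->get('my_module.last_check', 0);
  // Hypothetical helper: has any event ended since we last checked?
  if (my_module_events_ended_since($last_check)) {
    // Invalidate every cached response tagged 'rebuild-events-feed'.
    Cache::invalidateTags(['rebuild-events-feed']);
  }
  \Drupal::state()->set('my_module.last_check', \Drupal::time()->getRequestTime());
}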

For this example, we’ll just invalidate it manually. Clear your cache to begin using the new code (drush cr), then load the page, you will see something like:

{"hello":"Yet another title","time":1516805677}

As always, the time remains the same no matter how many times you reload the page.

Let’s say you are in the midst of a cron run and you have determined that you need to invalidate the cache for responses which have the cache tag ‘rebuild-events-feed’; you can run:

\Drupal::service('cache_tags.invalidator')->invalidateTags(['rebuild-events-feed'])

Let’s do it in Drush to see it in action:

drush ev "\Drupal::service('cache_tags.invalidator')->\ invalidateTags(['rebuild-events-feed'])"

We’ve just invalidated our ‘rebuild-events-feed’ tag and, hence, Responses that use it.

The dreaded “leaked metadata” error

This one is beyond my competence level, but I wanted to mention it anyway.

Let’s say you want to output your node’s URL to Json, you might consider computing it using $node->toUrl()->toString(). This will give us “/node/1”.

Let’s add it to our code:

...
'title' => $node->getTitle(),
'url' => $node->toUrl()->toString(),
'time' => time(),
...

This results in a very ugly error which completely breaks your site (at least at the time of this writing): “The controller result claims to be providing relevant cache metadata, but leaked metadata was detected. Please ensure you are not rendering content too early.”.

The problem, it seems, is that Drupal detects that the URL object, like the node we saw earlier, contains its own internal information which tells it when its cache should be invalidated. Converting it to a string prevents the Response from being informed of that information (again, if someone can explain this better than me, please leave a comment), so an exception is thrown.

The toString() function has an optional parameter, $collect_bubbleable_metadata, which can be used to get not just a string, but also information about when its cache should be invalidated. In Drush, this looks something like:

drush ev 'print_r(node_load(1)->toUrl()->toString(TRUE))'

Drupal\Core\GeneratedUrl Object
(
    [generatedUrl:protected] => /node/1
    [cacheContexts:protected] => Array ( )
    [cacheTags:protected] => Array ( )
    [cacheMaxAge:protected] => -1
    [attachments:protected] => Array ( )
)

This changes the return type of toString(), though: it no longer returns a string but a GeneratedUrl object, so this won’t work:

...
'title' => $node->getTitle(),
'url' => $node->toUrl()->toString(TRUE),
'time' => time(),
...

It gives us the error “Could not normalize object of type Drupal\Core\GeneratedUrl, no supporting normalizer found”.

ohthehugemanatee commented on Drupal.org on how to fix this. Integrating his suggestion, our code now looks like:

...
$url = $node->toUrl()->toString(TRUE);
$response = new ResourceResponse([
  'title' => $node->getTitle(),
  'url' => $url->getGeneratedUrl(),
  'time' => time(),
]);
$response->addCacheableDependency($node);
$response->addCacheableDependency($url);
...

This will now work as expected.

With all the fun we’re having, though, let’s take this a step further: let’s say we want to export the feed of front page items in our Response:

$url = $node->toUrl()->toString(TRUE);
$view = \Drupal\views\Views::getView("frontpage");
$view->setDisplay("feed_1");
$view_render_array = $view->render();
$rendered_view = render($view_render_array);
$response = new ResourceResponse([
  'title' => $node->getTitle(),
  'url' => $url->getGeneratedUrl(),
  'view' => $rendered_view,
  'time' => time(),
]);
$response->addCacheableDependency($node);
$response->addCacheableDependency($url);
$response->addCacheableDependency(CacheableMetadata::createFromRenderArray($view_render_array));

You will not be surprised to see the “leaked metadata was detected” error again… in fact, you have come to love and expect this error by now.

Here is where I’m completely out of my league; according to Crell, “[i]f you [use render() yourself], you’re wrong and you should fix your code”, but I’m not sure how to get a rendered view without calling render() myself… I’ve implemented a variation on a Drupal.org comment by mikejw, which suggests using a different render context to prevent Drupal from complaining.

// Requires: use Drupal\Core\Render\RenderContext;
$view_render_array = NULL;
$rendered_view = NULL;
\Drupal::service('renderer')->executeInRenderContext(new RenderContext(), function () use ($view, &$view_render_array, &$rendered_view) {
  $view_render_array = $view->render();
  $rendered_view = render($view_render_array);
});

If we check to make sure we have this line in our code:

$response->addCacheableDependency(CacheableMetadata::createFromRenderArray($view_render_array));

we’re telling our Response’s cache to invalidate whenever our view’s cache invalidates. So, for example, if we have several nodes promoted to the front page in our view, we can modify any one of them and our entire Response’s cache will be invalidated and rebuilt.


Categories: Drupal

Commerce Bulk

New Drupal Modules - 23 January 2018 - 3:14pm

Provides a service for bulk creation of Drupal Commerce entities. For now, only product variations can be bulk-created on a product add or edit form. See more on the module's GitHub repository:

https://github.com/drugan/commerce_bulk

Categories: Drupal

BYU Calendar

New Drupal Modules - 23 January 2018 - 2:07pm

This widget uses the Events API. You can read about BYU APIs here: https://calendar.byu.edu/byu-calendar-api-documentation

The specific API this module uses is: https://calendar.byu.edu/events-api

You can read more about the categories, or see current values and information, through the All Categories API: https://calendar.byu.edu/all-categories-api

Categories: Drupal

Drupal.org Featured Case Studies: Chicago Park District Website

Planet Drupal - 23 January 2018 - 1:39pm
Completed Drupal site or project URL: https://www.chicagoparkdistrict.com/

The Chicago Park District owns more than 8,800 acres of green space, making it the largest municipal park manager in the nation. The Chicago Park District’s more than 600 parks offer thousands of sports and physical activities as well as cultural and environmental programs for youth, adults, and seniors. The Chicago Park District is also responsible for 28 indoor pools, 50 outdoor pools, and 26 miles of lakefront including 23 swimming beaches plus one inland beach.

Clarity redesigned, built, and hosts the official website for the Chicago Park District (CPD). Clarity designed and developed this user-friendly, mobile-responsive site with a unified look and feel and a marketing emphasis to promote CPD’s parks, programs, and events. The new website acts as a solution focused on its customers – “front end” visitors of the website and “back end” content administrators – both of whom have a wide scope of needs.

Specifically, the new site provides the following improvements and features:

  • New Content Management System (CMS) Platform
    • Drupal 8, the latest version of the popular open-source framework;
    • Allows CPD to more easily integrate and connect to third-party tools, such as
      • ActiveNet, which provides externally-hosted ecommerce functions;
      • AppliTrack, which provides job postings;
      • Bonfire, which provides procurement and contracting opportunities;
      • MailChimp, which provides newsletter signup capabilities.
  • Updated design based on user focus group reactions to the old site, including
    • A cleaner, refreshed look built for devices of all sizes;
    • Home page updates that allow CPD staff to push more information in a more organized fashion;
    • Larger emphasis on maps (hugely important for such a large metropolitan area);
    • The ability to highlight features and attractions, such as artworks and natural areas, that CPD has to offer both residents and visitors;
    • Overall increased speed and performance.
  • Improved administrative functions that allow for
    • Distributed content responsibilities;
    • Workflow approvals to ensure editorial integrity;
    • More modular administrative tools allowing CPD to highlight location details such as accessibility features

With its new site, Chicago Park District is now poised to better serve the long-term needs of residents and visitors for years to come.

Categories: Drupal

Tasker

New Drupal Modules - 23 January 2018 - 1:15pm
Categories: Drupal

License Field

New Drupal Modules - 23 January 2018 - 10:26am

This project was started in response to https://www.drupal.org/project/creative_commons/issues/2938260

Rather than a Creative Commons specific module, the idea is to add support for all SPDX license codes and let the user select the supported license per field instance.

The idea is that this field can be used on node content types as well as Media entities.

Categories: Drupal

Web Wash: Getting Started with Bootstrap in Drupal 8

Planet Drupal - 23 January 2018 - 8:00am
Bootstrap is a front-end framework for building websites. It ships with prebuilt CSS and JavaScript components that make building sites fast, and it comes with all sorts of common components that every website needs, such as a grid system, buttons, drop-downs, responsive form elements, a carousel (of course) and so much more. As a developer, I don't want to spend time styling yet another button. I just want to know which CSS class to add to an <a> tag so it looks like a button, and I'm good to go.

One complaint about Bootstrap is that you can spot it a mile away, because a lot of developers use the default look-and-feel. When you see the famous Jumbotron, you know it's a Bootstrap site. But with a little bit of effort you can make your site look unique.

Aten Design Group: Using Address Fields in Configuration Forms

Planet Drupal - 23 January 2018 - 7:40am

In Drupal 7, the Address Field module gave developers an easy way to collect complex address information. You could simply add the field to your content type and configure which countries you support, along with which parts of an address are needed. However, this ease was limited to fieldable entities. If you needed to collect address information somewhere that wasn’t a fieldable entity, you had a lot more work in store for you. Chances are good that the end result would be as few text fields as possible, no validation, and support for only a single country. If you were feeling ambitious, maybe you would have provided a select list with the states or provinces supplied via a hardcoded array.

During my most recent Drupal 8 project I wanted to collect structured address information outside the context of an entity. Specifically, I wanted to add a section for address and phone number to the Basic Site Settings configuration page. As it turns out, the same functionality you get on entities is now also available to the Form API.

Address Field’s port to Drupal 8 came in the form of a whole new module, the Address module. With it comes a new address form element. Let’s use that to add a “Site Address” field to the Basic Settings. First we’ll implement hook_form_FORM_ID_alter() in a custom module’s .module file:

use Drupal\Core\Form\FormStateInterface;

function MYMODULE_form_system_site_information_settings_alter(&$form, FormStateInterface $form_state) {
  // Overrides go here...
}

Don’t forget to add use Drupal\Core\Form\FormStateInterface; at the top of your file. Next, we’ll add a details group and a fieldset for the address components to go into:

function MYMODULE_form_system_site_information_settings_alter(&$form, FormStateInterface $form_state) {
  // Create our contact information section.
  $form['site_location'] = [
    '#type' => 'details',
    '#title' => t('Site Location'),
    '#open' => TRUE,
  ];

  $form['site_location']['address'] = [
    '#type' => 'fieldset',
    '#title' => t('Address'),
  ];
}

Once the fieldset is in place, we can go ahead and add the address components. To do that you’ll first need to install the Address module and its dependencies. You’ll also need to add use CommerceGuys\Addressing\AddressFormat\AddressField; at the top of the file as we’ll need some of the constants defined there later.

use Drupal\Core\Form\FormStateInterface;
use CommerceGuys\Addressing\AddressFormat\AddressField;

function MYMODULE_form_system_site_information_settings_alter(&$form, FormStateInterface $form_state) {
  // … detail and fieldset code …

  // Create the address field.
  $form['site_location']['address']['site_address'] = [
    '#type' => 'address',
    '#default_value' => ['country_code' => 'US'],
    '#used_fields' => [
      AddressField::ADDRESS_LINE1,
      AddressField::ADDRESS_LINE2,
      AddressField::ADMINISTRATIVE_AREA,
      AddressField::LOCALITY,
      AddressField::POSTAL_CODE,
    ],
    '#available_countries' => ['US'],
  ];
}

There are a few things we’re doing here worth going over. First, we set '#type' => 'address', which the Address module creates for us. Next, we set a #default_value for country_code to US. That way the United States-specific field configuration is displayed when the page loads.

The #used_fields key allows us to configure which address information we want to collect. This is done by passing an array of constants as defined in the AddressField class. The full list of options is:

AddressField::ADMINISTRATIVE_AREA
AddressField::LOCALITY
AddressField::DEPENDENT_LOCALITY
AddressField::POSTAL_CODE
AddressField::SORTING_CODE
AddressField::ADDRESS_LINE1
AddressField::ADDRESS_LINE2
AddressField::ORGANIZATION
AddressField::GIVEN_NAME
AddressField::ADDITIONAL_NAME
AddressField::FAMILY_NAME

Without any configuration, a full address field looks like this when displaying addresses for the United States.

For our example above, we only needed the street address (ADDRESS_LINE1 and ADDRESS_LINE2), city (LOCALITY), state (ADMINISTRATIVE_AREA), and zip code (POSTAL_CODE).

Lastly, we define which countries we will be supporting. This is done by passing an array of country codes into the #available_countries key. For our example we only need addresses from the United States, so that’s the only value we pass in.

The last step in our process is saving the information to the Basic Site Settings config file. First we need to add a new submit handler to the form. At the end of our hook, let’s add this:

function MYMODULE_form_system_site_information_settings_alter(&$form, FormStateInterface $form_state) {
  // … detail and fieldset code …

  // … address field code …

  // Add a custom submit handler for our new values.
  $form['#submit'][] = 'MYMODULE_site_address_submit';
}

Now we’ll create the handler:

/**
 * Custom submit handler for our address settings.
 */
function MYMODULE_site_address_submit($form, FormStateInterface $form_state) {
  \Drupal::configFactory()->getEditable('system.site')
    ->set('address', $form_state->getValue('site_address'))
    ->save();
}

This loads our site_address field from the submitted values in $form_state, and saves it to the system.site config. The exported system.site.yml file should now look something like:

name: 'My Awesome Site'
mail: test@domain.com
slogan: ''
page:
  403: ''
  404: ''
  front: /user/login
admin_compact_mode: false
weight_select_max: 100
langcode: en
default_langcode: en
address:
  country_code: US
  langcode: ''
  address_line1: '123 W Elm St.'
  address_line2: ''
  locality: Denver
  administrative_area: CO
  postal_code: '80266'
  given_name: null
  additional_name: null
  family_name: null
  organization: null
  sorting_code: null
  dependent_locality: null

After that, we need to make sure our field will use the saved address as the #default_value. Back in our hook, let’s update that key with the following:

function MYMODULE_form_system_site_information_settings_alter(&$form, FormStateInterface $form_state) {
  // … detail and fieldset code …

  // Create the address field.
  $form['site_location']['address']['site_address'] = [
    '#type' => 'address',
    '#default_value' => \Drupal::config('system.site')->get('address') ?? [
      'country_code' => 'US',
    ],
    '#used_fields' => [
      AddressField::ADDRESS_LINE1,
      AddressField::ADDRESS_LINE2,
      AddressField::ADMINISTRATIVE_AREA,
      AddressField::LOCALITY,
      AddressField::POSTAL_CODE,
    ],
    '#available_countries' => ['US'],
  ];

  // … custom submit handler …
}

Using PHP 7’s null coalesce operator, we either set the default to the saved values or to a sensible fallback if nothing has been saved yet. Putting this all together, our module file should now look like this:

<?php

/**
 * @file
 * Main module file.
 */

use Drupal\Core\Form\FormStateInterface;
use CommerceGuys\Addressing\AddressFormat\AddressField;

/**
 * Implements hook_form_FORM_ID_alter().
 */
function MYMODULE_form_system_site_information_settings_alter(&$form, FormStateInterface $form_state) {
  // Create our contact information section.
  $form['site_location'] = [
    '#type' => 'details',
    '#title' => t('Site Location'),
    '#open' => TRUE,
  ];

  $form['site_location']['address'] = [
    '#type' => 'fieldset',
    '#title' => t('Address'),
  ];

  // Create the address field.
  $form['site_location']['address']['site_address'] = [
    '#type' => 'address',
    '#default_value' => \Drupal::config('system.site')->get('address') ?? [
      'country_code' => 'US',
    ],
    '#used_fields' => [
      AddressField::ADDRESS_LINE1,
      AddressField::ADDRESS_LINE2,
      AddressField::ADMINISTRATIVE_AREA,
      AddressField::LOCALITY,
      AddressField::POSTAL_CODE,
    ],
    '#available_countries' => ['US'],
  ];

  // Add a custom submit handler for our new values.
  $form['#submit'][] = 'MYMODULE_site_address_submit';
}

/**
 * Custom submit handler for our address settings.
 */
function MYMODULE_site_address_submit($form, FormStateInterface $form_state) {
  \Drupal::configFactory()->getEditable('system.site')
    ->set('address', $form_state->getValue('site_address'))
    ->save();
}

Lastly we should do some house cleaning in case our module gets uninstalled for any reason. In the same directory as the MYMODULE.module file, let’s add a MYMODULE.install file with the following code:

/**
 * Implements hook_uninstall().
 */
function MYMODULE_uninstall() {
  // Delete the custom address config values.
  \Drupal::configFactory()->getEditable('system.site')
    ->clear('address')
    ->save();
}

That’s it! Now we have a way to provide location information to the global site configuration. Using that data, I’ll be able to display this information elsewhere as text or as a Google Map. Being able to use the same features that Address field types have, I can leverage other modules that display address information or build my own displays, because I now have reliably structured data to work with.
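As a quick illustration of that reuse, here is a sketch of reading the saved address back out of configuration, for example in a preprocess function; the sprintf() formatting is just one possible rendering:

<?php

// Read the structured address saved by our submit handler.
$address = \Drupal::config('system.site')->get('address');
if (!empty($address['address_line1'])) {
  $formatted = sprintf(
    '%s, %s, %s %s',
    $address['address_line1'],
    $address['locality'],
    $address['administrative_area'],
    $address['postal_code']
  );
  // e.g. "123 W Elm St., Denver, CO 80266"
}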

Categories: Drupal

Views entity embed

New Drupal Modules - 23 January 2018 - 7:27am

The Views Entity Embed module allows you to embed views using the Embed module.

Categories: Drupal

Debounce

New Drupal Modules - 23 January 2018 - 3:03am

This is a backport of Drupal 8's debounce JavaScript (Drupal.debounce) to Drupal 7.

Usage:

drupal_add_library('debounce', 'drupal.debounce');

JavaScript snippet:

var debounce = Drupal.debounce(function () {
  alert('Hello world');
}, 250);
window.addEventListener('scroll', debounce);
Categories: Drupal

Icecast importer

New Drupal Modules - 23 January 2018 - 1:53am

Imports the title/info and multiple tracks from a standard Icecast XSPF playlist into a Drupal 8 node.

Categories: Drupal

Amazee Labs: Practices - Amazee Agile Agency Survey Results - Part 9

Planet Drupal - 23 January 2018 - 1:10am

This is part 9 of our series processing the results of the Amazee Agile Agency Survey. Previously I wrote about client interactions; this time let’s focus on practices. How often do teams deploy code? Are they practising peer reviews, automated testing, pair programming or story mapping?

By Josef Dabernig, 23 January 2018

When asked “How often does your team deploy code?”, 53% of the teams answered that they deploy “Rolling / Whenever necessary”. 13.3% deploy “About once a week”, another 13.3% “About every two weeks”, and 6.7% answered that they deploy “Daily”. The remaining respondents chose freeform answers, such as different frequencies based on the dev/stage/live environments, or said that it depends on the client.

For us at Amazee, the deployment schedule depends on the needs of the clients. Thanks to the automation that our Amazee.io hosting environment provides, any team member can execute a deployment on their own if it makes sense. Some high-availability clients require a fixed deployment schedule, which our team has programmed to happen every week; outside that schedule, only critical hotfixes are deployed immediately. Most of our clients allow us to deploy whenever we like, yet if a downtime is needed for more complex deployments, we usually try to schedule it outside of business hours. For global customers whose websites run across the globe, we try to find the deployment slot that fits best and rely on a proxy server like Varnish to keep serving anonymous users during a deployment downtime.


Our second question was geared towards finding out which agile practices teams use and how important they are considered. Respondents were able to rate each practice from “Unknown”, “Not needed” and “Tried but failed” through “Somewhat in use” and “Actively in use” up to “Very important”. The practice most widely unknown was mob programming. Story mapping is also widely unknown, but has a good number of respondents rating it “Somewhat in use”. Pair programming is somewhat in use for many, but also has a good number of respondents who answered “Unknown” or “Not needed”. The practices most often rated “Very important” were peer reviews/code reviews and user testing. Automated testing got a lot of votes for “Somewhat in use”, and a few respondents rated it “Very important”. Per-ticket branch test environments were rated “Somewhat in use” by many as well.

For us at Amazee, we do peer and code reviews for every work increment within our Scrum teams. This ensures code quality, knowledge transfer and feedback between team members. Automated testing happens for mission-critical features; Vasi has an article with good arguments for why you should invest in it. User testing is performed on about a third of our projects. Automated deployments, continuous integration and per-ticket branch test environments are extensively used, thanks again to the Amazee.io hosting environment goodies. Pair programming is quite common for our teams. While we have experimented with mob programming for teaching purposes, our team didn’t entirely pick it up. Finally, story mapping is something we started using recently with good results, but we don’t have much experience with it yet.

Which practices do you use and how often do you do deployments? Please leave us a comment below. If you are interested in Agile Scrum training, don’t hesitate to contact us.

Stay tuned for the last post where we’ll do a round up of the Agile Agency Survey.

Categories: Drupal

Image Moderate

New Drupal Modules - 23 January 2018 - 12:38am

The module uses the Microsoft Azure Cognitive Services API to moderate images and prevent the display of images with racist or adult content.
This module should be used on all sites where users are allowed to upload images (e.g. in community solutions).

About the API:

Categories: Drupal

Colorfield: Install Solr 7 for Drupal 8 Search API on Ubuntu 16.04

Planet Drupal - 22 January 2018 - 11:51pm
By christophe, 23 January 2018

A brief introduction to Search API Solr, an update on the ecosystem, and how to get Search API 2.x working on a dev environment with multiple collections.
Categories: Drupal

Drupal core announcements: Drupal 8 will require PHP 7.0 or higher starting March 6, 2019 (one year from now)

Planet Drupal - 22 January 2018 - 4:43pm

Drupal 8 will require PHP 7.0 or higher starting March 6, 2019. Drupal 8 users who are running Drupal 8 on PHP 5.5 or PHP 5.6 should begin planning to upgrade their PHP version to 7.0 or higher. Drupal 8.6 will be the final Drupal 8 version to support PHP 5, and will reach end-of-life on March 6, 2019, when Drupal 8.7.0 is released. (If 8.7.0 is released before March 6, 2019, the release number for the end-of-life will be updated accordingly, but the end-of-life date will remain the same.)

When planning for which PHP version to upgrade to, consider that PHP 7.2 was released on November 30, 2017 and will remain supported longer than older PHP 7 versions.

Why is support being dropped for PHP 5.5 and 5.6?
  • PHP 5.5 reached official end-of-life in 2016. Following that, a growing number of the PHP libraries used by Drupal 8 have also started to discontinue support for PHP 5.5.
  • PHP 5.6 stopped receiving active support from PHP maintainers in January 2017. This means that it is no longer receiving bugfixes, even for some very serious bugs that impact Drupal development.
  • PHP 5.6 is the final PHP 5 version, so the PHP maintainers are providing two years of security fixes for PHP 5.6 beyond its active support, through December 2018. This is a few months after Drupal 8.6's scheduled release and well before Drupal 8.7 would be released.
  • Drupal 8's automated tests require the PHPUnit library, which will drop support for PHP 5.6 in February 2018. Several other third-party dependencies are also dropping PHP 5.6 support in their latest versions.
  • To minimize disruption for both Drupal users and Drupal developers, Drupal 8's support of PHP 5.5 and PHP 5.6 will end at the same time.

We understand that upgrading from PHP 5 to PHP 7 may require time to plan and deploy. We suggest upgrading to PHP 7 in 2018 (rather than waiting for Drupal 8.7.0’s release).
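To check where a given environment stands, nothing Drupal-specific is required; a trivial sketch:

<?php

// Drupal 8 will require PHP 7.0 or higher as of March 6, 2019.
if (version_compare(PHP_VERSION, '7.0.0', '<')) {
  echo 'PHP ' . PHP_VERSION . ' will be unsupported; plan an upgrade.';
}
else {
  echo 'PHP ' . PHP_VERSION . ' meets the upcoming requirement.';
}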

What if I'm using a hosting service that doesn't offer PHP 7?

A majority of PHP hosting providers already offer PHP 7. If you're using one that doesn't, we suggest asking that provider when they will make it available, and if it's not until after March 2019, leave a comment on our tracking issue linking to that hosting provider, so that we can better understand the outliers, and perhaps offer some help.

What if I'm at an organization that maintains its own hosting, and we're using Ubuntu 14.04, which bundles PHP 5.5?

You have a few options if you are using Ubuntu 14.04:

  1. The preferred option is to plan an upgrade to Ubuntu 18.04 (to be released in April 2018). This version will be the most future-compatible.
  2. Another option is to upgrade to Ubuntu 16.04, which is available now. You may need to upgrade Ubuntu again in a couple of years if you choose 16.04 now.
  3. Finally, you can choose to upgrade to a separate build of PHP. Ondřej Surý provides a widely used PPA for doing this.
When will Drupal 8 drop support for PHP 7.0?

Support for PHP 7.0 will continue until at least March 6, 2019. We do not yet know whether Drupal 8's PHP 7.0 support will continue past that date, but we will post another announcement as soon as the end of PHP 7.0 support has been scheduled. We recommend you update to PHP 7.1 or higher since those versions will be supported longer.

How does this affect Drupal 8 core development?

Backported fixes account for about 80% of all changes and must continue to work on PHP 5.5 and 5.6 throughout Drupal 8.6.x's support cycle. For this reason, no PHP 7-only changes will be made until the 8.8.x branch is opened in early 2019 (or 8.9.x if 8.8.0 is released in 2018). Once 8.8.x is opened, the library dependencies in that branch can be updated to versions that have a PHP 7.0 requirement, and the Drupal code itself in that branch can begin relying on PHP 7 features. (Drupal 8 release cycle information)

The automated test suite already defaults to using PHPUnit 6 on environments that use PHP 7, but falls back to PHPUnit 4 on PHP 5. The fallback will be removed in the 8.8.x branch.

Does this affect Drupal 7?

No. Drupal 7 remains compatible with PHP 5.2.4 and higher. A separate announcement will be issued if and when that changes.

Categories: Drupal

Palantir: What “Content” Means to Different Teams

Planet Drupal - 22 January 2018 - 2:21pm
By Ken Rickard, January 23, 2018

The importance of aligning editorial, marketing, design, and development.


As we’ve discussed before, understanding the content on your website is a critical element in the project plan. Today, we’d like to step back a bit and talk about how different teams in an organization might think about content.

First, let’s define our common teams by function:

  • The Editorial team produces and maintains content for the site.
  • The Marketing team sets strategy and metrics around successful audience engagement and interactions.
  • The UX Design team creates the strategy, visual and interactive components that comprise the site’s features.
  • The Development team builds and supports the site so that it fulfills the needs defined by the other three teams.

Note that these teams may all be organized within a single department (commonly marketing) or spread across the organization. Our concern here is not with organizational structure but rather with the perspective and concerns that are inherent in each team.

When teams start work on a new site or a site redesign, the most common mistake is for these four teams to work in silos, as if their individual tasks are unrelated to each other. In this case, a number of issues may arise:

  • A design may include elements that place extra burden on the editorial team.
  • An editorial workflow may require the development of custom code.
  • A marketing plan may ignore the limited editorial and design resources available to achieve its goals.
  • Organizations that have a history of heavily relying on non-digital media for marketing and promotions may have to figure out how to incorporate and plan for the digital work into the existing workflow.
  • A CMS implementation may not be able to produce certain essential design features, or budget and timeline prevents features from being designed a certain way.

Working together, teams can work through these types of issues before they become problems. To do so, it’s vital to get everyone speaking the same language around your content. We like to look at five specific factors when helping teams define their content strategy:

  • Audience defines the users and their needs and answers “who is this for?”
  • Purpose asks the question “what end result are we hoping to achieve?”
  • Workflow deals with the mechanics of content production, approval, publication, and presentation.
  • Transformation explores issues of translation and personalization, so that we define how the content might be modified in distinct contexts.
  • Structure defines the input and storage of the content and how it will be delivered to various publication media. The structure is directly affected by the needs outlined by the three previous items.

Each of these elements has a direct effect on each of our project teams. To understand how, let’s take a look at Dr. Gillinov’s bio page at Cleveland Clinic and see how these questions bring focus to our project goals.

 

There are many elements that make up this comprehensive profile page and they all require each team member mentioned above to consider the following:

  1. Where does the data/content come from?
  2. What pieces of data/content is the editor responsible for?
  3. What does this page look like if it has all of the possible content types vs. physicians who have very little information?

For the purposes of this discussion, however, let’s focus on the top portion of the page, which holds the data/content that makes up Dr. Gillinov’s basic information, as it will help us illustrate our points. The first thing we look for here is the number of elements within the design pattern and how they might be produced. At first count, there are 11:

Let’s see how those elements break down.

  1. Picture – an uploaded image of the person.
  2. Video Link – a link to an external video service
  3. Rating – 1-5 stars based on patient feedback
  4. Rating Count – the number of patient ratings
  5. Comment Count – the number of patient comments
  6. Name – the name and honorifics for this person
  7. Department – the assigned internal department
  8. Primary Location – the main office location for this person
  9. Type of Doctor – indicates pediatrician, adult physician, or both
  10. Languages – a list of languages spoken
  11. Surgeon – indicates that this person is a licensed surgeon
Audience

There are multiple types of users that would view this page: potential patients, existing patients, families of patients, and medical professionals. Their needs are different based on who they are and where they are in their care journey.

Purpose

The primary purpose of this specific component is to provide basic information to the audience. The information presented helps them understand the services and availability of this doctor. The use of a picture and a video are designed to build trust by establishing a human connection in addition to the facts presented.

The inclusion of patient ratings serves as an impartial arbiter of the quality of services provided, while the department and location information helps people understand where they can go to receive treatment.

Workflow

For this example, the important question is “Which parts of this page are editorial and which are automated?” Here, the ratings are pulled in from a secondary system, which the editors do not control. The video is merely a link reference, but it is editorial data. And while some of the doctor information might be pulled from an external system, here we assume it can be edited for display on the web.

There is also an unlisted assumption here – call it feature #12 – about whether or not this doctor has active privileges at the hospital. Our editorial workflow needs to account for when an individual physician changes jobs, retires, or moves away.

Transformations

We use the term “transformations” here as a bit of a catch-all to describe how the data might need to change in different contexts. A common context shift is language.

When considering a multilingual website, we need to evaluate each element of the page for the desirability and feasibility of its translation.

Take the Video field for instance: Translating the link text for a video is trivial, but does the video itself need to be recorded in multiple languages (or at least subtitled)? Does it make sense to show a Spanish translation of the video link if the video is only in English?

The other most common transformation is personalization, wherein content elements are transformed based on our understanding of who the reader is and what they care about.

The key factor to consider about personalization is that it can create exponentially more work for the editorial team. Consider that for each element that desires personalization, we must create one new version for each variation. Let’s say that we want to segment our audience experience by three data points:

  • Returning patient (yes / no)
  • Local resident (yes / no)
  • Age cohort (child / adult / senior)

Now our one piece of content needs 2 x 2 x 3 = 12 variants, plus the original. For clarity, here’s how that looks mapped out: 

If we add in cases where one of the answers is not known, then the math becomes 3 x 3 x 4 = 36 plus the original variant.

As you can imagine, keeping track of those options can become a heavy editorial burden quite quickly if we were to personalize multiple elements on a page.
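The arithmetic generalizes: each added dimension multiplies the variant count. A tiny sketch of the combinatorics, purely for illustration:

<?php

// Personalization dimensions and the number of possible answers for each.
$dimensions = [
  'returning_patient' => 2, // yes / no
  'local_resident' => 2,    // yes / no
  'age_cohort' => 3,        // child / adult / senior
];

$variants = array_product($dimensions); // 2 * 2 * 3 = 12, plus the original.

// If "unknown" is also a possible answer for each dimension:
$with_unknown = array_product(array_map(function ($n) {
  return $n + 1;
}, $dimensions)); // 3 * 3 * 4 = 36, plus the original.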

Structure

The above questions help inform how this page is structured on the back end. Additionally, we have to consider:

  • What fields do we need to capture and report this data?
  • What format should the data be displayed in?
  • What services (other than the website) might consume this data?
  • In what other contexts might this data be shown?

This last question gives an easy example of the type of decision that your programmers may need to make. To fully understand, let’s look for a minute at the contexts of a search result.

Here, the results are alphabetized by the physician’s last name. If we enter the physician’s name as it appears in English, “A. Mark Gillinov, MD”, a computer cannot natively sort that string by last name. We should also consider whether the honorific “MD” should influence the sort order, and whether to sort by first and last name in the case of multiple matches to a common surname.

That generally leads to separating the sort value out into a 13th field concept: the sort name. In our example the sort name is likely to be “Gillinov Mark A.” The remaining question is whether editors should provide that detail or whether it should be automatically inferred by a custom element in the CMS.
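To illustrate the automation option, here is a hypothetical sketch of inferring a sort name in the CMS. It assumes a “Given [Middle] Family, Honorific” pattern, which real names frequently violate; that fragility is exactly why an editable override field is often the safer choice:

<?php

/**
 * Naively derives a sort name such as "Gillinov A. Mark" from a
 * display name such as "A. Mark Gillinov, MD". (Hypothetical helper.)
 */
function my_module_sort_name($display_name) {
  // Strip trailing honorifics such as ", MD" or ", PhD".
  $name = preg_replace('/,\s*(MD|DO|PhD|RN)\.?\s*$/i', '', $display_name);
  $parts = explode(' ', trim($name));
  // Treat the final token as the family name and move it first.
  $family = array_pop($parts);
  return $family . ' ' . implode(' ', $parts);
}

Note that even this simple case yields “Gillinov A. Mark” rather than the “Gillinov Mark A.” above; knowing which token is the preferred given name is editorial knowledge, which reinforces the point.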

Additionally, look at the elements that contain links:

  • Video
  • Ratings
  • Department
  • Primary Location

The target of these links needs to be captured, and the logic for that link generation accounted for in the CMS architecture. Further, can these elements be automatically derived from existing data (like the doctor’s name) or are they “hidden” metadata points that need to be added?

In most cases, the mapping for these elements is based on metadata:

  • Video – requires a unique URL for a YouTube video.
  • Ratings – requires a physician ID number provided by the ratings service.
  • Department –  selected from a list of Department pages controlled by the CMS.
  • Primary Location – selected from a list of Location pages controlled by the CMS and containing mapping metadata.

And to add one more element to the structure question: Which of these page elements allow for multiple selection? Can a doctor be part of two departments? Have three primary locations?

Making the Complex Simple

These kinds of workflow complexities in your data are absolutely essential to capture as early in the design process as possible. What if we find that “Languages spoken” is very important to patients but not currently available in our information set? That requires additional editorial work – and likely a staff-wide survey – that could take weeks to complete simply due to the coordination involved. The impact on initial design choices is worth mentioning as well. For example, do we need to consider fonts that have text alternates for language glyphs? Does the design still hold up (spacing, line length, relationship to imagery, etc.) when there is twice as much French text as English?

Since we’re working directly with Marketing to define our audience and purpose of each page, we should understand how each element of the design improves the overall user experience. That knowledge allows the entire team to make informed decisions about the level of effort to produce and maintain each content element.

All members of the team should have a familiarity and respect for the concerns of other members of the team. When developing and planning content, it is imperative to involve all four teams as early in the process as possible. To bring your content into focus, always ask the following questions about any design or content element shown in a wireframe or mockup:

  • What content or data will be needed to produce this element?
  • Does this content or data already exist in a usable format?
  • What format will this data be entered and stored in?
  • Will this element be editorially curated or automatically produced?
    • If automated, do we have business logic to support that automation?
    • If curated, do we have the staff time to support that creation and maintenance?

Building a robust content model and workflow is a team effort. The functionality of the CMS, and the designs it is capable of producing, is what brings the editorial, marketing, digital and IT teams together. Giving them visibility into each other’s work streams allows them to collaborate, and that collaboration gives the various team members collective ownership over the content experiences within their organizations.

Categories: Drupal
