Drupal

Cumul.io filters

New Drupal Modules - 1 August 2019 - 7:34am

Extension to the existing https://www.drupal.org/project/cumulio

Allows filters to be applied to Cumul.io dashboards by enabling applications to leverage Cumul.io's API.

Allows modules to alter the configuration properties of a dashboard before they are embedded.

See https://developer.cumul.io/#add-filters for more info.

Categories: Drupal

JSON:API Server Push

New Drupal Modules - 1 August 2019 - 7:26am

Experimental module that adds HTTP/2 server push capabilities to JSON:API. Not tested for production use.

This module enables the client to send requests with a serverPush query parameter. The query parameter can be used with or without a value, i.e. the following queries are both valid:

?fields[node--article]=title&serverPush
?fields[node--article]=title&serverPush=uid
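
Purely as an illustration, a client request with this parameter might look like the following; the host, endpoint path, and use of curl are assumptions based on JSON:API's default routing rather than anything specific to this module:

# Request article titles and ask the server to also push the resources
# referenced by the uid field (-g disables curl's URL globbing so the
# brackets in the query string are passed through literally).
curl -g --http2 -i "https://example.com/jsonapi/node/article?fields[node--article]=title&serverPush=uid"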

Categories: Drupal

Agaric Collective: Drupal migrations: Understanding the ETL process

Planet Drupal - 1 August 2019 - 7:12am

The Migrate API is a very flexible and powerful system that allows you to collect data from different locations and store it in Drupal. It is, in fact, a full-blown extract, transform, and load (ETL) framework. For instance, it could produce CSV files. Its primary use, though, is to create Drupal content entities: nodes, users, files, comments, etc. The API is thoroughly documented, and its maintainers are very active in the #migration Slack channel for those needing assistance. The use cases for the Migrate API are numerous and vary greatly. Today we are starting a blog post series that will cover different migration concepts so that you can apply them to your particular project.

Understanding the ETL process

Extract, transform, and load (ETL) is a procedure where data is collected from multiple sources, processed according to business needs, and the result stored for later use. This paradigm is not specific to Drupal. Books and frameworks abound on the topic. Let's try to understand the general idea by following a real-life analogy: baking bread. To make some bread you need to obtain various ingredients: wheat flour, salt, yeast, etc. (extracting). Then, you need to combine them in a process that involves mixing and baking (transforming). Finally, when the bread is ready you put it on shelves for display in the bakery (loading). In Drupal, each step is performed by a Migrate plugin:

The extract step is provided by source plugins.
The transform step is provided by process plugins.
The load step is provided by destination plugins.
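
To make the three plugin types concrete, here is a minimal sketch of a migration definition file. The machine name, source rows, and field mapping are illustrative assumptions; only the overall source/process/destination structure reflects how the Migrate API wires the plugins together.

id: example_article_migration
label: 'Example article migration'
source:
  # Source plugin (extract): embedded_data is a core plugin that reads
  # rows defined directly in the migration file, handy for examples.
  plugin: embedded_data
  data_rows:
    - legacy_id: 1
      headline: 'Hello world'
  ids:
    legacy_id:
      type: integer
process:
  # Process plugins (transform): map and massage source values into
  # the fields the destination expects.
  title: headline
  type:
    plugin: default_value
    default_value: article
destination:
  # Destination plugin (load): store the result as node entities.
  plugin: 'entity:node'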

As is the case with other systems, Drupal core offers some base functionality which can be extended by contributed modules or custom code. Out of the box, Drupal can connect to SQL databases, including previous versions of Drupal. There are contributed modules to read from CSV files, XML documents, JSON and SOAP feeds, WordPress sites, LibreOffice Calc and Microsoft Office Excel files, Google Sheets, and much more.

The list of core process plugins is impressive. You can concatenate strings, explode or implode arrays, format dates, encode URLs, look up already migrated data, among other transform operations. Migrate Plus offers more process plugins for DOM manipulation, string replacement, transliteration, etc.
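
As an illustration of chaining several of these core plugins, here is a possible process section; the field and column names are hypothetical, and the migration referenced by migration_lookup is the example defined earlier:

process:
  # Concatenate two source columns into the node title.
  title:
    plugin: concat
    source:
      - first_name
      - last_name
    delimiter: ' '
  # Convert a source date string into the UNIX timestamp Drupal stores.
  created:
    plugin: format_date
    from_format: 'Y-m-d H:i:s'
    to_format: 'U'
    source: date_posted
  # Look up the destination ID produced by an already executed migration.
  field_related_article:
    plugin: migration_lookup
    migration: example_article_migration
    source: legacy_related_id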

Drupal core provides destination plugins for content and configuration entities. Most of the time targets are content entities like nodes, users, taxonomy terms, comments, files, etc. It is also possible to import configuration entities like field and content type definitions. This is often used when upgrading sites from Drupal 6 or 7 to Drupal 8. Via a combination of source, process, and destination plugins it is possible to write Commerce Product Variations, Paragraphs, and more.

Technical note: The Migrate API defines another plugin type: `id_map`. They are used to map source IDs to destination IDs. This allows the system to keep track of records that have been imported and roll them back if needed.

Drupal migrations: a two step process

Performing a Drupal migration is a two-step process: writing the migration definitions and executing them. Migration definitions are written in YAML format. These files contain information about how to fetch data from the source, how to process the data, and how to store it in the destination. It is important to note that each migration file can only specify one source and one destination. That is, you cannot read from a CSV file and a JSON feed using the same migration definition file. Similarly, you cannot write to nodes and users from the same file. However, you can use as many process plugins as needed to convert your data from the format defined in the source to the format expected in the destination.

A typical migration project consists of several migration definition files. Although not required, it is recommended to write one migration file per entity bundle. If you are migrating nodes, that means writing one migration file per content type. The reason is that different content types will have different field configurations. It is easier to write and manage migrations when the destination is homogeneous. In this case, a single content type will have the same fields for all the elements to process in a particular migration.

Once all the migration definitions have been written, you need to execute the migrations. The most common way to do this is using the Migrate Tools module, which provides Drush commands and a user interface (UI) to run migrations. Note that the UI for running migrations only detects those that have been defined as configuration entities using the Migrate Plus module. This is a topic we will cover in the future. For now, we are going to stick to Drupal core's mechanisms of defining migrations. Contributed modules like Migrate Scheduler, Migrate Manifest, and Migrate Run offer alternatives for executing migrations.
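
As a quick reference, and keeping in mind that the exact command names depend on the Drush and Migrate Tools versions installed, executing the hypothetical migration sketched above typically looks like this:

# List all migrations and their current status.
drush migrate:status

# Run a single migration; the ID matches the one in the YAML definition.
drush migrate:import example_article_migration

# Undo a migration, removing the content it created.
drush migrate:rollback example_article_migration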

 

This blog post series is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether the migration series or other topics.

Read more and discuss at agaric.coop.

Categories: Drupal

Hello World! (REST)

New Drupal Modules - 1 August 2019 - 6:36am

Hello World!
Drupal 8 Restful API module example.

Categories: Drupal

GoCardless payment

New Drupal Modules - 1 August 2019 - 2:21am

This module implements a GoCardless payment method for the payment module.

Categories: Drupal

Media: JoVe

New Drupal Modules - 1 August 2019 - 1:42am

Media: JoVe adds JoVe as a supported media provider.

Categories: Drupal

CTI Digital: CTI Digital at Drupal Camp Pannonia 2019

Planet Drupal - 1 August 2019 - 12:30am

We’re thrilled to be attending Drupal Camp Pannonia from the 1st to 3rd August!

Categories: Drupal

Advanced cookiebar

New Drupal Modules - 31 July 2019 - 11:50pm
Categories: Drupal

Focal Point S3fs Cache

New Drupal Modules - 31 July 2019 - 10:56pm

Focal Point S3fs Cache

  • Utility module supporting image cache invalidation with S3fs and Focal Point
  • Flushes the S3 image cache when the focal point is changed
  • Allows selecting a different image style through the configuration menu
Categories: Drupal

User Extras

New Drupal Modules - 31 July 2019 - 2:41pm

User Extras is a module that provides extra functionality for the user registration flow. It allows creating users using only the email field and also assigns roles automatically.

Current Features

Categories: Drupal

Smbclient

New Drupal Modules - 31 July 2019 - 7:45am

The module provides tools for using Samba shares. It allows creating multiple Samba server settings.

Dependencies

The module depends on SMB Library.

Example:

$manager = \Drupal::service('smbclient.server_manager');
$fs = $manager->getServer('test_filestore_id');
$share = $fs->getShare('test_folder');
$file = $share->readFile('test_file.xml');
$content = stream_get_contents($file);

Development assistance is welcome.

Categories: Drupal

Taxonomy term generator

New Drupal Modules - 31 July 2019 - 5:02am

This module allows you to generate multiple terms in a vocabulary, which you can use as sample data or in production.

INSTALLATION:

  1. Install and enable this module.
  2. Go to admin/structure/taxonomy-term-generator.
  3. Select the vocabulary you want to add terms to.
  4. Enter how many terms you want to add.
  5. Click "Generate terms".
Categories: Drupal

Drupal Atlanta Medium Publication: Drupal. The Next Generation. They Are Out There.

Planet Drupal - 31 July 2019 - 4:57am
Approaching 20 years old, the Drupal community must prioritize recruiting the next generation of Drupal professionals.

Photo: Ferris Wheel in Centennial Olympic Park in Atlanta, Georgia

Time flies when you are having fun. It's one of those sayings I remember my parents using that turned out to be quite true. My first Drupal experience was nearly 10 years ago, and in the blink of an eye we have seen enormous organizations adopt and commit to Drupal, such as Turner, the Weather Channel, The Grammys, and Georgia.gov.

Throughout the years, I have been very fortunate to meet a lot of Drupal community members in person but one thing I have noticed lately is that nearly everyone’s usernames can be anywhere between 10–15 years old. What does that mean? As my dad would say, it means we are getting O — L — D, old.

For any thriving community, family business, organization, or even your favorite band for that matter, all of these entities must think about succession planning. Succession what, what is succession planning?

Succession planning is a process for identifying and developing new leaders who can replace old leaders when they leave, retire or die. -Wikipedia

That's right, we need to start planning a process for identifying who can take over in leadership roles that continue to push Drupal forward. If we intend to keep Drupal as a viable solution for large and small enterprises, then we should market ourselves to the talent pool as a viable career option and work to attract new talent to our community.

There are many different ways to promote our community and develop new leaders. One of them is mentorship. Mentorship helps ease the barrier to entry into our community by providing guidance on how our community operates. Our community does have some great efforts taking place in the form of mentoring, such as the Drupal Diversity & Inclusion (DDI) initiative, the core mentoring initiative, and of course the code and mentoring sprints at DrupalCon and DrupalCamps. These efforts are awesome and should be recognized as part of a larger strategic initiative to recruit the next generation of Drupal professionals.

Companies spend billions of dollars a year on recruiting, but as an open-source community we don't have billions, so

… what else can we do to attract new Drupal career professionals?

The Drupal Career Summit

This year, the Atlanta Drupal Users Group (ADUG) decided to develop the Drupal Career Summit in an effort to recruit more professionals into the Drupal community. Participants will explore career opportunities, career development, and how open source solutions are changing the way we buy, build, and use technology.

  • Learn about job opportunities and training.
  • Hear how local leaders progressed through their careers and the change open source creates for their clients and businesses.
  • Connect one-on-one with professionals in the career you want and learn about their progression, opportunities, challenges, and wins.
When and Where Is it?

On Saturday, September 14, from 1:00pm to 4:30pm. Hilton Garden Inn Atlanta-Buckhead, 3342 Peachtree Rd. NE | Atlanta, GA 30326 | LEARN MORE

Who Should Attend?

Students and job seekers can attend for FREE! The Summit will allow you to meet with potential employers and industry leaders. We’ll begin the summit with a panel of marketers, developers, designers, and managers that have extensive experience in the tech industry, and more specifically, the Drupal community. You’ll get a chance to learn about career opportunities and connect with peers with similar interests.

REGISTER

Are You Hiring?

We’re looking for companies that want to hire and educate. You can get involved with the summit by becoming a sponsor for DrupalCamp Atlanta. Sponsors of the event will have the opportunity to engage with potential candidates through sponsored discussion tables and branded booths. With your sponsorship, you’ll get a booth, a discussion table, and 2 passes! At your booth, you’ll get plenty of foot traffic and a fantastic chance to network with attendees.

BECOME A SPONSOR

What Can You Do?

If you can’t physically attend our first Career Summit, you can still donate to our fundraising goals. And if you are not in a position to donate, invite your employer, friends, and colleagues to participate in the Drupal Career Summit.

Drupal. The Next Generation. They Are Out There. was originally published in Drupal Atlanta on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categories: Drupal

Spinning Code: Bypass Pantheon Timeouts for Drupal 8

Planet Drupal - 31 July 2019 - 4:30am

Pantheon is an excellent hosting service for both Drupal and WordPress sites. But to make their platform work and scale well, they have built a number of limits into the platform. These include process time limits and memory limits that are large enough for the vast majority of projects, but from time to time they run you into trouble on large jobs.

For data loading and updates their official answer is typically to copy the database to another server, run your job there, and copy the database back onto their server. That’s fine if you can afford to freeze updates to your production site, set up a process to mirror changes into your temporary copy, or absorb some other project overhead that can be limiting and challenging. But sometimes that’s not an option, or the data load takes too long for that to be practical on a regular basis.

I recently needed to do a very large import of records into a Drupal database, and so I started to play around with solutions that would allow me to ignore those time limits. We were looking at about 50 million data writes, and the running time was initially over a week to complete the job.

Since Drupal’s batch system was created to solve this exact problem it seemed like a good place to start. For this solution you need a file you can load and parse in segments, like a CSV file, which you can read one line at a time. It does not have to represent the final state, you can use this to actually load data if the process is quick, or you can serialize each record into a table or a queue job to actually process later.

One quick note about the code samples: I wrote these based on the service-based approach outlined in my post about batch services and the batch service module I discussed there. It could be adapted to a more traditional batch job, but I like the clarity the wrapper provides for breaking this back down for discussion.

The general concept here is that we upload the file and then progressively process it from within a batch job. The code samples below provide two classes to achieve this. The first is a form that provides a managed file field, which creates a file entity that can be reliably passed to the batch processor. From there the batch service takes over and uses a bit of basic PHP file handling to load the file into a database table. If you need to do more than load the data into the database directly (say create complex entities or other tasks) you can set up a second phase to run through the values to do that heavier lifting.

To get us started the form includes this managed file:

$form['file'] = [
  '#type' => 'managed_file',
  '#name' => 'data_file',
  '#title' => $this->t('Data file'),
  '#description' => $this->t('CSV format for this example.'),
  '#upload_location' => 'private://example_pantheon_loader_data/',
  '#upload_validators' => [
    'file_validate_extensions' => ['csv'],
  ],
];

The managed file form element automagically gives you a file entity, and the value in the form state is the id of that entity. This file will be temporary and have no references once the process is complete and so depending on your site setup the file will eventually be purged. Which all means we can pass all the values straight through to our batch processor:

$batch = $this->dataLoaderBatchService->generateBatchJob($form_state->getValues());

When the data file is small enough, a few thousand rows at most, you can load them all right away without the need of a batch job. But that runs into both time and memory concerns, and the whole point of this is to avoid those. With this approach we can ignore those limits, and we’re only limited by Pantheon’s upload file size. If the file size is too large, you can upload the file via SFTP and read directly from there; so while this is an easy way to load the file, you have other options.

As we set up the file for processing in the batch job, we really need the file path, not the ID. The main reason to use a managed file is that we can reliably get the file path on a Pantheon server without really needing to know anything about where they have things stashed. Since we’re about to use generic PHP functions for file processing we need to know that path reliably:

$fid = array_pop($data['file']);
$fileEntity = File::load($fid);
$ops = [];
if (empty($fileEntity)) {
  $this->logger->error('Unable to load file data for processing.');
  return [];
}
$filePath = $this->fileSystem->realpath($fileEntity->getFileUri());
$ops = ['processData' => [$filePath]];

Now we have a file, and since it’s a CSV we can load a few rows at a time, process them, and then start again.

Our batch processing function needs to track two things in addition to the file: the header values and the current file position. So in the first pass we initialize the position to zero and then load the first row as the header. For every pass after that we need to find the point where we left off. For this we use generic PHP file functions for loading the file and seeking to the current location:

// Old-school file handling.
$path = array_pop($data);
$file = fopen($path, "r");
...
fseek($file, $filePos);

// Each pass we process 100 lines, if you have to do something complex
// you might want to reduce the run.
for ($i = 0; $i < 100; $i++) {
  $row = fgetcsv($file);
  if (!empty($row)) {
    $data = array_combine($header, $row);
    $member['timestamp'] = time();
    $rowData = [
      'col_one' => $data['field_name'],
      'data' => serialize($data),
      'timestamp' => time(),
    ];
    $row_id = $this->database->insert('example_pantheon_loader_tracker')
      ->fields($rowData)
      ->execute();

    // If you're setting up for a queue you include something like this.
    // $queue = $this->queueFactory->get('example_pantheon_loader_remap');
    // $queue->createItem($row_id);
  }
  else {
    break;
  }
}
$filePos = (float) ftell($file);
$context['finished'] = $filePos / filesize($path);

The example code just dumps this all into a database table. This can be useful as a raw data loader if you need to add a large data set to an existing site that’s used for reference data or something similar.  It can also be used as the base to create more complex objects. The example code includes comments about generating a queue worker that could then run over time on cron or as another batch job; the Queue UI module provides a simple interface to run those on a batch job.

I’ve run this process for several hours at a stretch. Pantheon does have issues with system errors if a batch job is left running for extreme lengths (I ran into problems on some runs after 6-8 hours of run time), so a prep into the database followed by running on a queue or something else easier to restart has been more reliable.

View the code on Gist.
Categories: Drupal

OPTASY: Best of July: Top 5 Drupal Blog Posts that We Have Bookmarked this Month

Planet Drupal - 31 July 2019 - 12:45am

It's that time of the month again! The time when we express our thanks to those Drupal teams who've generously (and altruistically) shared valuable free content with us, the rest of the community. Content with an impact on our own workflow and our Drupal development process. In this respect, as usual, we've handpicked 5 Drupal blog posts, from all those that we've bookmarked this month, that we've found most “enlightening”.

From:
 

Categories: Drupal

Scheduled Delete

New Drupal Modules - 30 July 2019 - 11:27pm

Schedule delete - Alternate Utility Module

  1. This module provides an option to schedule nodes for deletion; scheduled nodes are deleted via Drupal cron.
  2. This module will add a column to your node_field_data table, so please take a backup of your database before installing/uninstalling this module.
Categories: Drupal

Palantir: The Wisconsin Department of Employee Trust Funds

Planet Drupal - 30 July 2019 - 3:24pm

Re-platforming the Wisconsin Department of Employee Trust Funds website from HTML templates to Drupal 8, including a complete visual and user experience redesign.

etf.wi.gov

An Effortless Customer Experience for Wisconsin Government Employees, Retirees, and Employers

The Wisconsin Department of Employee Trust Funds (ETF) is responsible for managing the pension and other retirement benefits of over 630,000 state and local government employees and retirees in the state of Wisconsin. With approximately $98 billion in assets, the Wisconsin Retirement System is the 8th largest U.S. public pension fund and the 25th largest public or private pension fund in the world.

In addition to overseeing the Wisconsin Retirement System, ETF also provides benefits such as group health insurance, disability benefits, and group life insurance. Over a half-million people rely on the information ETF provides in order to make decisions that impact not only themselves, but their children, and their children’s children.

The Challenge

Given the extent of services provided by ETF, their website is large and complex with a wealth of vital information. When ETF first approached Palantir.net, their website hadn't been updated since the early 2000s, and it lacked a content management system. The site wasn’t accessible, it wasn’t responsive, and its overall user experience needed a drastic overhaul.

Simply put, Wisconsin government employees could not easily navigate the crucial information they need to understand and make decisions about the benefits that they earn.

The new ETF site should lower customer effort across all touchpoints. An effortless customer experience: One stop, fewer clicks and increased satisfaction.

ETF's vision
Key Outcomes

ETF engaged Palantir to re-platform their website from HTML templates to the Drupal 8 content management system, including a complete visual and user experience redesign. After partnering with Palantir, the new user-friendly ETF site:

  • Provides ETF staff and all website visitors with a seamless experience
  • Allows Wisconsin government employees, retirees, and employers to efficiently access accurate and current information
  • Incorporates best practices for content publication for the ETF digital team
Upgrade to the Employee Benefits Explorer

One of the most notable features of the new site is ETF’s Benefits Explorer.

An important function of the ETF site is offering information regarding pension, health insurance, and other benefits. On ETF’s old site, employees were required to select cumbersome and confusing benefit options before they could find detailed information about their benefits. This task was made even more difficult by the fact that the names of benefit options are not descriptive or intuitive. ETF’s previous solution was to send physical mailers with distinctive photos on the covers, and then direct visitors to the website to select the benefit options that had the same image as their mailer.

Palantir and ETF knew that the “find my benefits” user experience needed a complete overhaul. In our initial onsite, we identified a potential solution: creating a database of employers and the benefit options they offer. With this list we built a benefits explorer tool that allows ETF’s customers to search for benefits by the one piece of information they will always have: the name of their employer.

With the new explorer experience, site visitors begin by typing in the name of their employer and are immediately provided with their benefit options. We built two versions of the tool: one for the specific task of identifying health plan options, which are typically decided once a year during open enrollment, and one for identifying all benefits offered by your employer, which can be used any time of year.

The new “My Benefits” explorer is now the second most visited page on the new ETF site, which shows just how helpful this new feature is for ETF’s customers.

How We Did It

In order to transform the ETF website into an effortless experience for Wisconsin government employees, retirees, and employers, there were five critical elements to consider.

Our approach involved:

  1. Identifying “Top Tasks”
  2. Revising content
  3. User-testing the information architecture
  4. Crafting an accessible, responsive design
  5. Replatforming on a robust Content Management System: Drupal

Identifying “Top Tasks”

The biggest negative factor of the previous ETF site’s user experience was its confusing menus. The site presented too many options and pathways for people to find information such as which health insurance plan they belong to or how to apply for retirement benefits, and the pathways often led to different pages about the same topic. Frequently, people would give up or call customer support, which is only open during typical business hours.

Palantir knew the redesign would have the most impact if the site was restructured to fit the needs of ETF’s customers. In order to guarantee we were addressing customers’ most important needs, we used the Top Task Identification methodology developed by customer experience researcher and advocate Gerry McGovern.

Through the use of this method, we organized ETF’s content by the tasks site users deemed most important, with multiple paths to get to content through their homepage, site and organic search, and related content.

Revising Content

Our goal was to make content findable by both internal and external search engines. No matter what page a visitor enters the site on, the page should make sense and the visitor should be able to find their way to more information.

While the Palantir team was completing the Top Tasks analysis, the internal ETF team revised the website content by focusing on:

  • Plain language: ETF had convened a “Plain Language” initiative before engaging with Palantir, and that team was dedicated to transforming the tone of ETF’s content from stiff, formal legalese to a friendlier, more accessible tone.
  • “Bite, snack, meal” content writing strategy: The ETF team used this strategy to “chunk” their content for different levels of user engagement. A bite is a headline with a message, a snack is a short summary of the main points, and a meal is a deep dive into the details.
  • Improving metadata for accessibility and search: Palantir provided guidance on standardizing metadata, particularly for ETF’s PDF library. The ETF content team made sure that all their content had appropriate metadata.

User-tested Information Architecture (IA)

Once we had the results from our Top Tasks study, we worked toward designing a menu organized intuitively for customers. In our initial onsite, we conducted an open card sort with about 40 ETF stakeholders. Our goal was to have the ETF site content experts experiment with ways to improve the labelling, grouping, and organization of the content on their site.

We divided the stakeholders into six teams of five and gave them a set of 50 cards featuring representative content on their site. The ideas our teams generated fell largely into two groups:

  • Audience-oriented: content is organized by the role/title/person who uses it. In this approach, main menu terms might include Retiree, Member, and Employer. This approach was how ETF had content organized on their site at the time of the exercise.
  • Task-oriented: content is organized by the type of task the content relates to. In this approach, main menu terms might include Benefits, Retirement, and Member Education.

When we came back together as a group, our team of 40 stakeholders agreed that exploring a task-based information architecture would be worthwhile, but there was significant concern that switching away from an audience-based IA would confuse their customers.

Since making the site easy to use was one of our top project goals, our teams agreed to a rigorous IA testing approach. We agreed to conduct two rounds of tree tests on both an audience-oriented and task-oriented IA, and conduct three additional rounds of tests to refine the chosen approach.

Ultimately, our tests showed that the most intuitive way to organize the content for ETF’s range of customers was to organize by task, with one significant exception: Employers. ETF serves the human resources teams of all state of Wisconsin employers, and those users had completely separate content requirements from those of government employees and retirees.

Responsive Design System on Drupal

Because the former ETF site was significantly outdated, it completely lacked a content model, and the site itself was a massive hodgepodge of design elements. Palantir identified key ETF user flows, matched the content to each flow, and then abstracted out templates to serve each flow.

The overarching goal of this design system is to create intuitive, repeatable user flows through the website. These patterns enable visitors to quickly scan for the information they need, and make quick decisions about where to go next.

Accessibility and responsiveness are core requirements for ETF. Palantir used the a11y checklist from the very beginning of our design process, and continuously tested our visual designs for font size, color contrast, and general spacing of elements to ensure that accessibility was built into our design rather than retrofitted at the end.

We also conducted usability tests with real users, which helped us make accessibility improvements that accessibility checkers missed. In addition, the new design system is also fully responsive, which enables ETF’s customers to access the site from any device.

Robust Content Management

In addition to the efficiencies gained for site visitors, the new Drupal 8 site streamlines the content management process for the internal ETF site admin team. Previously, content creation happened across people and departments with minimal editorial guidelines. Once the copy was approved internally, the new content was handed to the webmaster for inclusion on the site. The process for updating content frequently took weeks.

With their new Drupal 8 platform, the ETF team has moved to a distributed authorship workflow, underpinned by the Workbench suite of modules which allows for editorial/publishing permissions. Now, ETF subject matter experts can keep their own content up to date while following established standards and best practices. The decision to publish content is still owned by a smaller group, to ensure that only approved and correct content is uploaded to the live site.

The Results

With the fully responsive design system in place, the new ETF site offers a significantly upgraded user experience for their customers:

  • Task-oriented: Our data-based top tasks approach ensured that we kept the focus on the user’s journey through the site. Everything from the menus to the page-level strategy to the visual design was geared towards making it effortless for visitors to achieve their most important tasks.
  • Structured content: Not only has the website’s content been rewritten to be more scannable for readers, but it’s also now structured for SEO. Our initial user research uncovered search as one of the most frustrating aspects of the site: “The main thing for me is really the search results: the most up to date version is never the first thing that turns up.” By adding metadata to ETF’s library of PDF forms and transforming their content from freeform text to structured data, ETF’s search experience has made a complete turnaround.
  • User testing: Our strategy and design work was validated throughout the engagement with real site users, which kept us all grounded in the outcomes.
  • Accessible and responsive design: The design system isn’t just WCAG AA compliant according to accessibility testing software; we worked with users to ensure that the site delivers a good experience with screen readers. Incorporating a11y standards from the very beginning of the design process ensured that accessibility was baked into our design rather than a last-minute add-on.

Palantir created a task-based navigation and content organization to support the customer journey, which is contributing to a better user experience. The new site is more personalized and engaging for customers.

Mark Lamkins

Director, Office of Communications

Categories: Drupal

Kubernetes

New Drupal Modules - 30 July 2019 - 12:56pm
Overview
  • Kubernetes is a subsidiary module under the Cloud module.
  • Please refer to: Cloud

Sponsor

Categories: Drupal

Horizontal Integration: Testing Acquia Lift & Content Hub locally with ngrok

Planet Drupal - 30 July 2019 - 12:34pm
Testing integrations with external systems can sometimes prove tricky. Services like Acquia Lift & Content Hub need to make connections back to your server in order to pull content. Testing this requires that your environment be publicly accessible, which often precludes testing on your local development environment.

Enter ngrok

As mentioned in Acquia's documentation, ngrok can be used to facilitate local development with Content Hub. Once you install ngrok on your development environment, you'll be able to use the ngrok client to connect and create an instant, secure URL to your local development environment that will allow traffic to connect…
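
As a rough sketch of the idea (the local port is an assumption; use whatever your local web server actually listens on), exposing a local Drupal site through ngrok looks like this:

# Create a public HTTPS tunnel to a local site running on port 8080.
ngrok http 8080

# ngrok then prints a forwarding URL such as https://<random-id>.ngrok.io,
# which external services like Acquia Lift & Content Hub can use to reach
# the local environment.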
Categories: Drupal
