Drupal

Droptica: It’s been a year with Droopler!

Planet Drupal - 20 December 2018 - 1:00am
Soon, we will be celebrating the first anniversary of the day Droopler – our open-source Drupal 8 distribution – was released. It is a perfect time for some summaries and plans for the future. In this article, I’m going to show you how Droopler works and what awaits its users in the upcoming release.
Categories: Drupal

Bulk Entity Action

New Drupal Modules - 19 December 2018 - 10:11pm

Under Development.

Categories: Drupal

Token Debug

New Drupal Modules - 19 December 2018 - 9:01pm

Adds an admin page that helps debug tokens.

Categories: Drupal

Views CSS Style

New Drupal Modules - 19 December 2018 - 8:37pm

Provides a Views style plugin that renders inline CSS into the page's head area.

Categories: Drupal

Jeff Geerling's Blog: Deploying an Acquia BLT Drupal 8 site to Kubernetes

Planet Drupal - 19 December 2018 - 2:28pm

Wait... what? If you're reading the title of this post, and are familiar with Acquia BLT, you might be wondering:

  • Why are you using Acquia BLT with a project that's not running in Acquia Cloud?
  • You can deploy a project built with Acquia BLT to Kubernetes?
  • Don't you, like, have to use Docker instead of Drupal VM? And aren't you [Jeff Geerling] the maintainer of Drupal VM?

Well, the answers are pretty simple:

Categories: Drupal

jsSHA

New Drupal Modules - 19 December 2018 - 2:14pm

Provides Drupal integration with the jsSHA library, a JavaScript implementation of the complete Secure Hash Standard family as well as HMAC.

Categories: Drupal

Aten Design Group: GraphQL with Drupal: Getting Started

Planet Drupal - 19 December 2018 - 10:29am

Decoupling Drupal is a popular topic these days. We’ve recently posted about connecting Drupal with Gatsby, a subject that continues to circulate around the Aten office. There are a number of great reasons to treat your CMS as an API. You can leverage the content modeling powers of Drupal and pull that content into your static site, your JavaScript application, or even a mobile app. But how to get started?

In this post I will first go over some basics about GraphQL and how it compares to REST. Next, I will explain how to install the GraphQL module on your Drupal site and how to use the GraphiQL explorer to begin writing queries. Feel free to skip the intro if you just need to know how to install the module and get started.

A Brief Introduction to GraphQL

Drupal is deep in development on its API First Initiative, and the core team is working on getting json:api into core. This exposes Drupal's content via a consistent, standardized solution that responds to REST requests and has many advantages.

Recently the JavaScript community has become enamored with GraphQL, a query language for APIs which is touted as an alternative to REST.

Developed by Facebook, GraphQL is now used across the web, from GitHub's latest API to the New York Times redesign.

GraphQL opens up APIs in a way that traditional REST endpoints cannot. Rather than exposing individual resources with fixed data structures and links between resources, GraphQL gives developers a way to request any selection of data they need. Multiple resources on the server side can be queried at once on the client side, combining different pieces of data into one query and making the job of the front-end developer easier.

Why is GraphQL Good for Drupal?

GraphQL is an excellent fit for Drupal sites, which are made up of entities that have data stored as fields. Some of these fields could store relationships to other entities. For example, an article could have an author field which links to a user.

The Limitations of REST

Using a REST API with that example, you might query for “Articles”. This returns a list of article content, including an author user ID. But to get that author’s content you might need to do a follow-up query per user ID to fetch the author’s info, then stitch that article together with the parts of the author you care about. You may have only wanted the article title, link, and the author's name and email, but if the API is not well designed this could require several calls to the server, each returning far more info than you wanted: perhaps the article publish date, its UUID, maybe the full content text as well. This problem of “overfetching” and “underfetching” is not an endemic fault of all REST-based APIs. It’s worth mentioning that json:api has its own solutions for this specific example, using sparse fieldsets and includes.

Streamlining with GraphQL

With GraphQL, your query can request just the fields needed from the Article. Because of this flexibility, you craft the query as you want it, listing exactly the fields you need (for example, the title and URL), then traverse the relationship to the user, grabbing the name and email address. It also makes it simple to restructure the object you want back: starting with the author, then getting a reverse reference to Articles. Just by rewriting the query you can change the display from an article teaser to a user with a list of their articles.

Either of these queries can be written, fields may be added or removed from the result, and all of this without writing any code on the backend or any custom controllers.

This is all made possible by the GraphQL module, which exposes every entity in Drupal, from pages to users to custom data defined in modules, as a GraphQL schema.

Installing GraphQL for Drupal

If you want to get started with GraphQL and Drupal, the process requires little configuration.

  1. Install the module with Composer, since it depends on the graphql-php vendor library. If you're using a Composer-based Drupal install, use the command:
    composer require drupal/graphql
    to install the module and its dependencies.
  2. Enable the module; it will generate a GraphQL schema for your site which you can immediately explore.
Example Queries with GraphiQL

Now that you have GraphQL installed, what can you do? How do you begin to write queries to explore your site’s content? One of the most compelling tools built around GraphQL is the explorer, called GraphiQL. This is included in the installation of the Drupal GraphQL module. Visit it at:

/graphql/explorer

The page is divided into left and right sides. At the left you can write queries. Running a query with the button in the top left will display the response on the right pane.

Write a basic query on the left side, hit the play button to see the results on the right.

As you write a query, GraphiQL will try to autocomplete to help you along.

As you type, GraphiQL will try to autocomplete. With entities, you can hit play to have it fill in all the default properties.
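
To make that concrete, a minimal query against the generated schema might look something like the sketch below (nodeQuery, entities, and entityLabel come from the schema the module generates; the exact shape can vary by module version):

    query {
      # Fetch the first 10 nodes, with a couple of generic entity properties.
      nodeQuery(limit: 10) {
        entities {
          entityId
          entityLabel
        }
      }
    }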

You can also dive into the live documentation in the far right pane. You'll see queries for your content types, the syntax for selecting fields, as well as options for filtering or sorting.

Since the schema is self documenting, you can explore the options available in your site.

The documentation here uses autocomplete as well. You can type the name of an entity or content type to see what options are available.

Add additional filter conditions to your query.

Filters are condition groups; in the above example I am filtering by the "article" content type.
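
As a rough sketch (using the condition syntax from the 8.x-3.x GraphQL module; other versions may differ), such a filtered query could look like:

    query {
      # Only return nodes of the "article" content type.
      nodeQuery(filter: {conditions: [{field: "type", value: ["article"]}]}) {
        entities {
          entityLabel
        }
      }
    }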

In the previous example I am just getting generic properties of all nodes, like entityLabel. However, if I am filtering by the "Article" type, I would want access to fields specific to Articles. By defining those fields in a "fragment", I can substitute the fragment right into my query in place of those individual defaults.

Use fragments to set bundle specific fields.

Because my author field is an entity reference, you'll see the syntax is similar to the nodes above. Start with entities, then list the fields on that entity you want to display. This would be an opportunity to use another fragment.
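
Putting that together, a query plus fragment might look roughly like this; NodeArticle and fieldAuthor are illustrative names, since the actual type and field names come from your site's content model:

    query {
      nodeQuery(filter: {conditions: [{field: "type", value: ["article"]}]}) {
        entities {
          ...ArticleTeaser
        }
      }
    }

    # Bundle-specific fields for Article nodes, including the author entity reference.
    fragment ArticleTeaser on NodeArticle {
      entityLabel
      entityUrl {
        path
      }
      fieldAuthor {
        entity {
          entityLabel
        }
      }
    }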

Now that the query is displaying results how I want, I can add another filter to show different content. In this case, a list of unpublished content.

Add another filter to see different results.
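
In query terms that is just one more condition in the same filter, for example (again assuming the 8.x-3.x condition syntax, where a status of 0 means unpublished and viewing it requires the right permissions):

    query {
      nodeQuery(filter: {conditions: [
        {field: "type", value: ["article"]},
        {field: "status", value: ["0"]}
      ]}) {
        entities {
          entityLabel
        }
      }
    }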

Instead of showing a list of articles with their user, I could rearrange this query to get all the articles for a given user.

Display reverse references with the same fragment.

I can reuse the same fragment to get the Article exactly as I had before, or edit that fragment to remove just the user info. The nodeQuery just changes to a userById, which takes an ID much like the nodeQuery can take a filter. Notice the reverseFieldAuthorNode. This allows us to get any content that references the user.
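
A sketch of that reversed query, using the names mentioned above (the user ID and the Article field names are placeholders for whatever exists on your site):

    query {
      userById(id: "1") {
        entityLabel
        # Reverse entity reference: nodes whose author field points at this user.
        reverseFieldAuthorNode {
          entities {
            ... on NodeArticle {
              entityLabel
              entityUrl {
                path
              }
            }
          }
        }
      }
    }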

Up Next: Building a Simple GraphQL App

If you’re new to GraphQL, spend a little time learning how the query language works by practicing in the GraphiQL Explorer. In the next part of this post I will go over some more query examples, write a simple app with create-react-app and apollo, and explain how GraphQL can create and update content by writing a mutation plugin.

Categories: Drupal

Lullabot: A Toolset For Enterprise Content Inventories

Planet Drupal - 19 December 2018 - 10:06am

Earlier this year, Lullabot began a four-month-long content strategy engagement for the state of Georgia. The project would involve coming up with a migration plan from Drupal 7 to Drupal 8 for 85 of their state agency sites, with an eye towards a future where content can be more freely and accurately shared between sites. Our first step was to get a handle on all the content on their existing sites. How much content were we dealing with? How was it organized? What did it contain? In other words, we needed a content inventory. Each of these 85 sites was its own individual install of Drupal, with the largest containing almost 10K unique URLs, so this one was going to be a doozy. We hadn't really done a content strategy project of this scale before, and our existing toolset wasn't going to cut it, so I started doing some research to see what other tools might work.

Open up any number of content strategy blogs and you will find an endless supply of articles explaining why content inventories are important, and templates for storing said content inventories. What you will find a distinct lack of is the how: how does the data get from your website to the spreadsheet for review? For smaller sites, manually compiling this data is reasonably straightforward, but once you get past a couple hundred pages, this is no longer realistic. In past Drupal projects, we have been able to use a dump of the routing table as a great starting point, but with 85 sites even this would be unmanageable. We quickly realized we were probably looking at a spider of some sort. What we needed was something that met the following criteria:

  • Flexible: We needed the ability to scan multiple domains into a single collection of URLs, as well as the ability to include and exclude URLs that met specific criteria. Additionally, we knew that there would be times when we might want to just grab a specific subset of information, be it by domain, site section, etc. We honestly weren't completely sure what all might come in handy, so we wanted some assurance that we would be able to flexibly get what we needed as the project moved forward.
  • Scalable: We are looking at hundreds of thousands of URLs across almost a hundred domains, and we knew we were almost certainly going to have to run it multiple times. A platform that charged per-URL was not going to cut it.
  • Repeatable: We knew this was going to be a learning process, and, as such, we were going to need to be able to run a scan, check it, and iterate. Any configuration should be saveable and cloneable, ideally in a format suitable for version control which would allow us to track our changes over time and more easily determine which changes influenced the scan in what ways. In a truly ideal scenario, it would be scriptable and able to be run from the command line.
  • Analysis: We wanted to be able to run a bulk analysis on the site’s content to find things like reading level, sentiment, and reading time. 

Some of the first tools I found were hosted solutions like Content Analysis Tool and DynoMapper. The problem is that these tools charge on a per-URL basis, and weren't going to have the level of repeatability and customization we needed. (This is not to say that these aren't fine tools, they just weren't what we were looking for in terms of this project.) We then began to architect our own tool, but we really didn't want to add the baggage of writing it onto an already hectic schedule. Thankfully, we were able to avoid that, and in the process discovered an incredibly rich set of tools for creating content inventories which have very quickly become an absolutely essential part of our toolkit. They are:

  • Screaming Frog SEO Spider: An incredibly flexible spidering application. 
  • URL Profiler: A content analysis tool which integrates well with the CSVs generated by Screaming Frog.
  • GoCSV: A robust command line tool created with the sole purpose of manipulating very large CSVs very quickly.

Let's look at each of these elements in greater detail, and see how they ended up fitting into the project.

Screaming Frog

Screaming Frog is an SEO consulting company based in the UK. They also produce the Screaming Frog SEO Spider, an application which is available for both Mac and Windows. The SEO Spider has all the flexibility and configurability you would expect from such an application. You can very carefully control what you do and don’t crawl, and there are a number of ways to report the results of your crawl and export it to CSVs for further processing. I don’t intend to cover the product in depth. Instead, I’d like to focus on the elements which made it particularly useful for us.

Repeatability

A key feature in Screaming Frog is the ability to save both the results of a session and its configuration for future use. The results are important to save because Screaming Frog generates a lot of data, and you don’t necessarily know which slice of it you will need at any given time. Having the ability to reload the results and analyze them further is a huge benefit. Saving the configuration is key because it means that you can re-run the spider with the exact same configuration you used before, meaning your new results will be comparable to your last ones. 

Additionally, the newest version of the software allows you to run scans using a specific configuration from the command-line, opening up a wealth of possibilities for scripted and scheduled scans. This is a game-changer for situations like ours, where we might want to run a scan repeatedly across a number of specific properties, or set our clients up with the ability to automatically get a new scan every month or quarter.

Extraction

As we explored what we wanted to get out of these scans, we realized that it would be really nice to be able to identify some Drupal-specific information (NID, content type) along with the more generic data you would normally get out of a spider. Originally, we had thought we would have to link the results of the scan back to Drupal’s menu table in order to extract that information. However, Screaming Frog offers the ability to extract information out of the HTML in a page based on XPath queries. Most standard Drupal themes include information about the node inside the CSS classes they create. For instance, here is a fairly standard Drupal body tag.

<body class="html not-front not-logged-in no-sidebars page-node page-node- page-node-68 node-type-basic-page">

As you can see, this class contains both the node’s ID and its content type, which means we were able to extract this data and include it in the results of our scan. The more we used this functionality, the more uses we found for it. For instance, it is often useful to be able to identify pages with problematic HTML early on in a project so you can get a handle on problems that are going to come up during migration. We were able to do things like count the number of times a tag was used within the content area, allowing us to identify pages with inline CSS or JavaScript which would have to be dealt with later.

We’ve only begun to scratch the surface of what we can do with this XPath extraction capability, and future projects will certainly see us dive into it more deeply. 

Analytics

Another set of data you can bring into your scan is associated with information from Google Analytics. Once you authenticate through Screaming Frog, it will allow you to choose what properties and views you wish to retrieve, as well as what individual metrics to report within your result set. There is an enormous number of metrics available, from basics like PageViews and BounceRate to extended reporting on conversions, transactions, and ad clicks. Bringing this analytics information to bear during a content audit is the key to identifying which content is performing and why. Screaming Frog also has the ability to integrate with Google Search Console and SEO tools like Majestic, Ahrefs, and Moz.

Cost

Finally, Screaming Frog provides a straightforward yearly license fee with no upcharges based on the number of URLs scanned. This is not to say it is cheap (the cost is around $200 a year), but having it be predictable, without worrying about how much we used it, was key to making this part of the project work.

URL Profiler

The second piece of this puzzle is URL Profiler. Screaming Frog scans your sites and catalogs their URLs and metadata. URL Profiler analyzes the content which lives at these URLs and provides you with extended information about them. This is as simple as importing a CSV of URLs, choosing your options, and clicking Run. Once the run is done, you get back a spreadsheet which combines your original CSV with the data URL Profiler has put together. It provides an extensive number of integrations, many of them SEO-focused. Many of these require extended subscriptions to be useful; however, the software itself provides a set of content quality metrics if you check the Readability box. These include:

  • Reading Time
  • 10 most frequently used words on the page
  • Sentiment analysis (positive, negative, or neutral)
  • Dale-Chall reading ease score
  • Flesch-Kincaid reading ease score
  • Gunning-Fog estimation of years of education needed to understand the text
  • SMOG Index estimation of years of education needed to understand the text

While these algorithms need to be taken with a grain of salt, they provide very useful guidelines for the readability of your content, and in aggregate can be really useful as a broad overview of how you should improve. For instance, we were able to take this content and create graphs that ranked state agencies from least to most complex text, as well as by average read time. We could then take read time and compare it to "Time on Page" from Google Analytics to show whether or not people were actually reading those long pages. 

On the downside, URL Profiler isn't scriptable from the command-line the way Screaming Frog is. It is also more expensive, requiring a monthly subscription of around $40 a month rather than a single yearly fee. Nevertheless, it is an extremely useful tool which has earned a permanent place in our toolbox. 

GoCSV

One of the first things we noticed when we ran Screaming Frog on the Georgia state agency sites was that they had a lot of PDFs. In fact, they had more PDFs than they had HTML pages. We really needed an easy way to strip those rows out of the CSVs before we ran them through URL Profiler because URL Profiler won’t analyze downloadable files like PDFs or Word documents. We also had other things we wanted to be able to do. For instance, we saw some utility in being able to split the scan out into separate CSVs by content type, or state agency, or response code, or who knows what else! Once again I started architecting a tool to generate these sets of data, and once again it turned out I didn't have to.

GoCSV is an open source command-line tool that was created with the sole purpose of performantly manipulating large CSVs. The documentation goes into these options in great detail, but one of the most useful functions we found was a filter that allows you to generate a new subset of data based on the values in one of the CSV’s cells. This allowed us to create extensive shell scripts to generate a wide variety of data sets from the single monolithic scan of all the state agencies in a repeatable and predictable way. Every time we did a new scan of all the sites, we could, with just a few keystrokes, generate a whole new set of CSVs which broke this data into subsets that were just documents and just HTML, and then for each of those subsets, break them down further by domain, content type, response code, and pre-defined verticals. This script would run in under 60 seconds, despite the fact that the complete CSV had over 150,000 rows. 

Another use case we found for GoCSV was to create pre-formatted spreadsheets for content audits. These large-scale inventories are useful, but when it comes to digging in and doing a content audit, there’s just way more information than is needed. There were also a variety of columns that we wanted to add for things like workflow tracking and keep/kill/combine decisions which weren't present in the original CSVs. Once again, we were able to create a shell script which allowed us to take the CSVs by domain and generate new versions that contained only the information we needed and added the new columns we wanted. 

What It Got Us

Having put this toolset together, we were able to get some really valuable insights into the content we were dealing with. For instance, by having an easy way to separate the downloadable documents from HTML pages, and then even further break those results down by agency, we were able to produce a chart which showed the agencies that relied particularly heavily on PDFs. This is really useful information to have as Georgia’s Digital Services team guides these agencies through their content audits. 

One of the things that URL Profiler brought into play was the number of words on every page in a site. Here again, we were able to take this information, cut out the downloadable documents, and take an average across just the HTML pages for each domain. This showed us which agencies tended to cram more content into single pages rather than spreading it around into more focused ones. This is also useful information to have on hand during a content audit because it indicates that you may want to prioritize figuring out how to split up content for these specific agencies.

Finally, after running our scans, I noticed that for some agencies, the amount of published content they had in Drupal was much higher than what our scan had found. We were able to put together the two sets of data and figure out that some agencies had been simply removing links to old content like events or job postings, but never archiving it or removing it. These stranded nodes were still available to the public and indexed by Google, but contained woefully outdated information. Without spidering the site, we may not have found this problem until much later in the process. 

Looking Forward

Using Screaming Frog, URL Profiler, and GoCSV in combination, we were able to put together a pipeline for generating large-scale content inventories that was repeatable and predictable. This was a huge boon not just for the State of Georgia and other clients, but also for Lullabot itself as we embark on our own website re-design and content strategy. Amazingly enough, we just scratched the surface in our usage of these products and this article just scratches the surface of what we learned and implemented. Stay tuned for more articles that will dive more deeply into different aspects of what we learned, and highlight more tips and tricks that make generating inventories easier and much more useful. 

Categories: Drupal

Bulk mail

New Drupal Modules - 19 December 2018 - 9:52am

The Bulk Email module provides site administrators with an interface for sending mass email quickly and easily.
To use modules like "Views bulk operations" or "Views send", email addresses have to exist as entities in the Drupal system; that is not the case with this module.
Use case: suppose you have a long list of email addresses in a text file and you want to send an email to all of them. In that case this module will be helpful.
How to use this module

Categories: Drupal

Migrate Process URL

New Drupal Modules - 19 December 2018 - 9:52am

A custom migration may only have a URL to import into a field. If using Drupal core's link field, you have to assign the value directly to the uri column:

field_my_website/uri:
  source: my_custom_url_source

The problem is, sometimes the source field will not be in the correct URL format for Drupal core. This module provides a new process plugin, field_link_generate, that creates an array for use with the field_link process plugin:

Categories: Drupal

Jeff Geerling's Blog: Hosted Apache Solr now supports Drupal Search API 8.x-2.x, Solr 7.x

Planet Drupal - 19 December 2018 - 8:50am

Earlier this year, I completely revamped Hosted Apache Solr's architecture, making it more resilient, more scalable, and better able to support having different Solr versions and configurations per customer.

Today I'm happy to officially announce support for Solr 7.x (in addition to 4.x). This means that no matter what version of Drupal you're on (6, 7, or 8), and no matter what Solr module/version you use (Apache Solr Search or Search API Solr 1.x or 2.x branches), Hosted Apache Solr is optimized for your Drupal search!

Categories: Drupal

CKEditor Resize

New Drupal Modules - 19 December 2018 - 8:30am

This module integrates the resize CKEditor plugin for Drupal 8.

This plugin allows you to resize the classic editor instance by dragging the resize handle (◢)
located in the bottom right (or bottom left in the Right-to-Left mode) corner of the editor.
It can be configured to make the editor resizable only in one direction (horizontally, vertically)
or in both.

Categories: Drupal

Evolving Web: 5 Things New Drupal Site Builders Struggle With

Planet Drupal - 19 December 2018 - 8:15am

I’ve recently been researching, writing, and talking about the content editor experience in Drupal 8. However, in the back of my mind I’ve been reflecting on the site builder experience. Every developer and site builder who learns Drupal is going to use the admin UI to get their site up-and-running. What are some things site builders often struggle with in the admin UI when learning Drupal?

Blocks

For most Drupal site builders, the Block layout page is key to learning how Drupal works. However, there is more to Blocks than just the Block layout page. You can also create different types of blocks with different fields in Drupal 8.

Site builders new to Drupal don’t usually stumble across the Block Types page on their own. In fact, I think a lot of site builders don’t know about block types at all. Probably because "Block Types" is not listed in the second level of the administration menu under “Structure”, but instead buried in the third level of the menu.

Similarly, site builders might never find the “Custom block library” page for creating block content. Depending on how blocks are being used on a particular site, this page might be more logically nested under “Content”.

Many users never find the “Demonstrate block regions” link, a really key page for anyone learning how Drupal works and what regions are. Most Drupal site builders who see this page for the first time are delighted, so making this link more prominent might be an easy way to improve the experience for site builders.

Appearance

Typically, a Drupal site has two themes: the default/front-end theme and the admin/back-end theme. The appearance page doesn’t make this clear. Some site builders learning Drupal end up enabling an admin theme on the front-end or a front-end theme for the admin UI. I think the term "default theme" is confusing for new users. And making a consistent UI for setting a theme as the default theme or the admin theme would be a nice improvement.

Install vs. Download

The difference between installing and downloading a module is not laid out clearly. If someone is trying Drupal for the first time, they’ll likely use the UI to try and install modules, rather than do it through the command line. In the UI, they see the link to “Install New Module”. Once this is done, it seems like the module should be installed. Even though they have the links available to “Enable newly installed modules”, they might not read these options carefully. I think re-labelling the initial link to "Download New Module" might help here.

Most users are also confused about how to uninstall a module. They don’t know why they can’t uncheck a checkbox on the "Extend" page. Providing a more visible link to the uninstall page from installed modules might help with this.

Configuration Management

The UI for configuration management is pretty hidden in Drupal 8. In practice, configuration management is something we typically do via the command line; this is how most seasoned Drupalers would import/export configuration. However, for someone learning how Drupal 8 works, they’re going to be learning initially from the UI. And at the moment, site builders are virtually unaware of Configuration Management and how it affects their work.

Having some kind of simple reminder in the UI to show site builders the status of their configuration could go a long way to them understanding the configuration management workflow and that they should be using it.

The Admin Toolbar

Everyone loves the admin toolbar module. Once it’s installed, site builders are happy and ask “Why isn’t this part of Drupal core?”

But, for a certain set of people, it’s not clear that the top level of this navigation is clickable. The top-level pages for “Configuration” and “Structure” are index pages that we don’t normally visit. But the “Content” page provides the content listing, and the “Extend” page shows us all our modules. These are obviously key pages. Imagine trying to learn Drupal if you don’t realize you can click on these pages for the first week. But users who are used to not being able to click top-level elements might simply miss these pages. Does anyone know a good way to signal that these are clickable?

What's Next?

I would love to hear how you think we should improve the admin UI for site builders and if you have any thoughts on my suggestions. 

One thing that I'm very excited about that's already happening is a new design to modernize the look and feel of the Admin UI in Drupal. This will go a long way to making Drupal seem more comfortable and easy to use for everyone, content editors and site builders alike. You can see the new designs here.

Categories: Drupal

TEN7 Blog's Drupal Posts: Episode 049: Jeff Robbins

Planet Drupal - 19 December 2018 - 7:48am
In this episode, Ivan is joined by his friend Jeff Robbins: entrepreneur, co-founder of Lullabot, founder of Yonder, an executive coach, an author, a signed recording artist and a self-proclaimed philosopher. Here's what we're discussing in this podcast: Discovering the meaning of a fortnight; Jeff's background; Boston's universities tour; Starting at O'Reilly Media; Gopher & open source software; Orbit, one of the first bands on the internet; Signing with A&M Records; Surviving record labels; Intellectual property, algorithms and paradigms; The complexity of band management; On the Lollapalooza tour with Snoop Doggy Dogg; The unintentional result of subliminal intentionality; 123 Astronaut; The transition from music to Drupal management; Lullabot, one of the first fully distributed companies; DrupalCon Vancouver 2006, let the hiring begin; Yonder, a traveler's guide to distribution; The future of corporate distribution; Writing a new book; One Minute Manager, Make Friends and Influence People and Tribal Leadership.
Categories: Drupal

TFA GA TOTP

New Drupal Modules - 19 December 2018 - 4:05am

This module fixes the problem of expired token usage with Google Authenticator in TFA login. It matches the code currently shown in the app against the entered one, and once the code has expired in the Google Authenticator app, it will be declined by Drupal.

Categories: Drupal

Commerce PartPay

New Drupal Modules - 18 December 2018 - 7:23pm
Commerce PartPay

Commerce PartPay integrates PartPay with Drupal Commerce payment and checkout system.

Features

Commerce PartPay supports gateway hosted billing methods.

Available Payment Methods

PartPay
PartPay is a payment solution hosted by PartPay; your site won’t see credit card details so
you have fewer requirements for compliance. It’s safer and simpler to implement.

Categories: Drupal

TID to Name

New Drupal Modules - 18 December 2018 - 2:59pm

This module is a Twig extension that converts a term ID (TID) to a language-aware taxonomy term name.

Categories: Drupal

Quicklink

New Drupal Modules - 18 December 2018 - 1:18pm
Drupal implementation of Google Chrome Labs' Quicklink library
How it works

Quicklink attempts to make navigations to subsequent pages load faster. It:

Categories: Drupal

pgwslider

New Drupal Modules - 18 December 2018 - 12:04pm
This plugin is SEO compliant: the slideshow elements are declared using plain HTML markup. If you wish, you can add one or more sliders on the same page.
Features
  • Fully responsive slider
  • Less than 4 KB (minified and gzipped)
  • All browsers supported (desktop and mobile devices)
  • SEO compliant
Categories: Drupal

pgwslideshow

New Drupal Modules - 18 December 2018 - 12:02pm
  • Fully responsive slideshow
  • Desktop and Mobile devices supported
  • Less than 4 KB (minified and gzipped)
  • Full customization with the CSS file included
Categories: Drupal
