Planet Drupal

Drupal.org - aggregated feeds in category Planet Drupal

Chromatic: Getting Ready for the Drupal Global Contribution Weekend

23 January 2019 - 8:26am

This coming weekend is the Drupal Global Contribution Weekend, when small local groups around the world volunteer their Drupal development time simultaneously. This year there are local groups gathering in Canada, England, Germany, India, Russia, Spain, and the United States.

Categories: Drupal

TIP Solutions: Hotellinx ERP system to Drupal Commerce integration

23 January 2019 - 4:24am
The task was to integrate an ERP system (specifically, a hotel management system called Hotellinx) which is, and should always remain, the single source of truth (SSOT) for the products the client offers and administers. If you want to sell these products from another system, such as a webshop, you need to transfer the data there to showcase your goods to customers. Put another way: the webshop exists only to make the purchase, while all the data is stored in the ERP system, the SSOT.

Tags: Planet Drupal, Drupal 8, Hotellinx, Integration, Commerce
Categories: Drupal

ADCI Solutions: Getting started REST API with Symfony 4

23 January 2019 - 2:24am

We fell in love with Symfony back when its components were added to Drupal 8. We keep on learning and teaching how to use this powerful PHP framework for projects with complicated business logic.
In this Symfony 4 tutorial, we will create a basic server back-end structure for your application using the REST API architecture style. 

Sounds fun? You may find the article here.

Categories: Drupal

OSTraining: How to Rewrite the Output of Views Fields in Drupal

23 January 2019 - 2:06am

One of our customers asked how to tweak the fields of a table output by Views to give the table a cleaner look.

They were looking for a way to merge the fields of the first and second columns. They also wanted to display the file download link just with an icon.

There are a couple of ways to achieve this. One of them is to rewrite the output of Views’ fields.

This tutorial will explain how to rewrite the results of any Views field. We can rewrite the results regardless of which display the view is using (table, list, grid, etc.).

Categories: Drupal

Virtuoso Performance: Announcing the Soong project - developing a general-purpose ETL framework

22 January 2019 - 12:10pm

I'd like to invite members of the open-source community, particularly (but not exclusively) those involved with PHP, to join in designing and developing a general-purpose ETL framework for data migration. The vendor name for packaging components of this project is soong, and git repos for existing components are under the GitLab account "soongetl".

Note: Finally having finished composition of this lengthy monologue, it's clear to me that it's very ambitious (some might say arrogant) of me to write with the expectation that this will grow into a large and robust open-source ecosystem. Very well - it is ambitious, and the effort may very well fall flat on its face. C'est la vie...

Who am I?

I'm Mike Ryan - a lot of people in the Drupal community know me, but not so much the wider open-source community. Almost eleven years ago at a Drupal meetup in Boston, amongst general agreement that everyone hates to do data migration, Moshe Weitzman looked across the table at me and said "there's an opportunity here." Since then data migration into Drupal has been the primary focus of my professional life, first in partnership with Moshe, then as an Acquia employee and finally as a solo consultant. Over the years I've created several migration-related contrib modules for Drupal, was part of the team integrating migration into Drupal core for D8, and have been involved in dozens of real-world migration projects.

Why am I doing this? I think we can do better

Just within Drupal, the migration framework can be improved:

  1. Each step of Drupal migration support has been a port of the previous - from the hook-based Drupal 6 version, to the inheritance-and-composition model in Drupal 7, to the plugin-based system in Drupal 8, technical debt has accumulated. I've wanted for a while to start over with a clean slate - given my experience (and others'), what would we do differently starting from scratch? Can we step back and re-examine the assumptions we've been carrying forward?
  2. At a specific technical level, the biggest itch I've wanted to scratch is decoupling the components. Within the migration system as it is in D8 today, pretty much every component knows everything about every other component. At one point we had a destination plugin which was using some of the migration's source plugin configuration - that one made my eye twitch!
  3. There's also the coupling of the migration system with Drupal - in particular, migration classes *are* Drupal plugins (i.e., their interfaces extend PluginInspectionInterface) rather than being *managed by* Drupal plugins. I would like to see migration classes be all about migration, rather than worry about being plugins as well. And once the basic migration classes are no longer Drupal plugins, then it's a small step to them being entirely independent of Drupal...
The larger PHP community

With Drupal 8, we’ve often talked about “getting off the island” in terms of benefiting from much fine PHP work done outside of the Drupal community. We haven’t talked so much about going in the opposite direction - making our own fine work available for use beyond Drupal. To my knowledge, the only published example of this so far is Kris Vanderwater (EclipseGC) with the plugin library.

Likewise, we Drupal developers don’t have a monopoly on good migration ideas - by moving the general-purpose aspects of migration into a separate open source project, we have the opportunity to benefit from new ideas and new talent.

The community we build

The major key to success for any large open-source project like this is a thriving community. After seeing open-source projects like Drupal grow organically - and face growing pains as they find themselves dealing with community problems reactively rather than proactively - if a community does form around this project, I would like to establish a supportive and welcoming tone from the beginning.

Diversity in particular remains an issue in the tech industry in general, and open-source especially - and a lack of diversity is difficult to correct after the fact. In building a community around this framework, my hope is that we draw a diverse set of developers from the beginning, so that seeding the garden well proves, if not self-sustaining, at least more sustainable. How to do that, I'm not certain - a concerted outreach effort could easily end up looking like Pokemon Go, searching for unique creatures to collect. Apart from starting with a good Code of Conduct, I'm open to suggestions!

Another aspect of community-building is providing opportunities for relative novices (whether new to open-source development, new to PHP, or new to migration). The proposed architecture involves myriad small, well-focused packages - an extractor here, a set of related transformers there, integrations for specific frameworks and APIs... Individual transformers, in particular, will generally be very simple. This ecosystem thus will provide ample opportunities for novices to gain experience with mentorship and also establish an online presence.

Now, all that being said, what about The Ethics of Unpaid Labor and the OSS Community (also see the recent Twitter discussion in the Drupal community)? In reaching out to underrepresented groups and to novices, we are reaching out to the people who have the least ability to work on open source for free. One way to ameliorate this effect may be to explicitly try to draw in students - whether in formal programs or teaching themselves software development - who will benefit from some free practical education and mentorship. Down the road, if this framework does start being adopted in real-world applications, we can look at ways to get sponsorships for people who maintain projects within the ecosystem. At any rate, as the community here grows I expect this will be an ongoing conversation.

Selfishness

Yes, I'm willing to cop to selfish reasons to pursue this.

  1. Simple ego: I'm proud of the work I've done on migration in Drupal, and think it can be useful on a larger stage. Being old enough to see retirement on the horizon, I admit I'm thinking of this as my magnum opus - the last major contribution I make to open source. I would love to leave behind a significant piece of quality software with a vital community behind it.
  2. Money: I've done fine as a Drupal data migration specialist. I hope to do better by expanding my market beyond Drupal, working on a wider variety of migration projects. Yes, retirement is on the horizon but, given earlier attempts at consulting which went less well than my "migration period" has, my funds put that horizon farther out than I'd like...
What's done so far?

Early last year I started playing around with a proof-of-concept in a single repo, getting a single basic ETL migration scenario running with a decoupled class structure based on the basic architecture of the Drupal migration system. Much of the work after getting the initial POC running was figuring out appropriate boundaries between components, and gradually introducing features beyond the most basic ones I started with. And then breaking pieces out into separate source repos, and figuring out those boundaries.

My role

This will certainly change according to the number and skills of contributors who join in this effort (assuming there are some!), but what I'm aiming for in terms of my own role:

  1. Primary architect of version 1 of Soong. This would mean being the primary maintainer of architecture documentation and the repository of central interfaces/base classes. Per "selfishness" above - I have an architectural vision I want to see brought to fruition. Others may take it in different directions after that, but V1 is mine! tl;dr - I don't want to be BDFL; I do want to be BDF1.
  2. Community leader. Per "community" above, I have a vision for building a diverse and vibrant open-source community from the ground up. Unlike the technical architecture, however, this plays less to my strengths, so I will be happy to defer as better-suited people show leadership in the community.
  3. Mentorship. I'd like to help people up their development skills, their open-source involvement, and their understanding of the pits and perils of data migration.
Why did it take me so long?

After having it in the back of my head for a few years, I finally started creating repos and putting my thoughts into actual interfaces and classes several months ago. Why did I wait until now to share my work with the larger community? I certainly felt seen when I read this:

https://twitter.com/jessfraz/status/1063425181509652481

Frankly, there's an element of imposter syndrome here - I wanted to be sure I wasn't exposing any dumb ideas! Well, enough of that - instead, I now stipulate that you will find dumb things I did here, and ask that you help smartify them.

The architecture itself

There's a ways to go documenting the architecture as it is currently implemented in soong/soong, but right now it broadly looks like this:

  1. A Task accepts configuration defining a migration process, and implements operations - most notably migrate, but it may also support other operations like rollback, status, analyze, … The following steps describe the migrate operation.
  2. The task constructs the configured Extractor, which obtains data from a source such as a SQL query, a CSV file, an XML/JSON API, etc.
  3. Iterating over the extractor returns one DataRecord (collection of named DataProperty instances) at a time containing source data. The task creates an empty DataRecord representing the destination data.
  4. The task configuration defines a transform pipeline keyed by destination property names. For each of these properties, a sequence of one or more Transformer classes (each with its own configuration) is invoked to determine the destination property value. Usually the first transformer is configured to accept one or more source property names; its result is fed to subsequent transformers, and the final result is assigned to the named property in the destination DataRecord.
  5. The destination DataRecord is passed to the configured Loader to be loaded into the destination store - a SQL database, a CSV file, etc.
  6. If an optional KeyMap is configured within the task, it is used to store the mapping from the source record's unique key to the destination record's unique key. This enables keyed relationships to be maintained even if keys change when migrating, as well as enabling rollback.
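
To make the flow above concrete, here is a minimal PHP sketch of the migrate operation. The component names (Task, Extractor, Transformer, Loader, DataRecord) are the ones described above, but every signature is an assumption for illustration - not Soong's actual API:

<?php
declare(strict_types=1);

// Hypothetical interfaces for illustration only - not Soong's actual API.
interface Transformer {
  // Refine an intermediate value, with access to the full source record.
  public function transform($value, DataRecord $source);
}

interface Loader {
  // Persist a fully transformed record into the destination store (step 5).
  public function load(DataRecord $record): void;
}

final class DataRecord {
  private $properties = [];

  public function set(string $name, $value): void {
    $this->properties[$name] = $value;
  }

  public function get(string $name) {
    return $this->properties[$name] ?? null;
  }
}

final class Task {
  private $extractor; // Iterable of source DataRecords (steps 2-3).
  private $pipeline;  // Destination property name => Transformer[] (step 4).
  private $loader;

  public function __construct(iterable $extractor, array $pipeline, Loader $loader) {
    $this->extractor = $extractor;
    $this->pipeline = $pipeline;
    $this->loader = $loader;
  }

  public function migrate(): void {
    // One source DataRecord at a time (step 3).
    foreach ($this->extractor as $source) {
      $destination = new DataRecord();
      // Run each destination property's transformer pipeline (step 4).
      foreach ($this->pipeline as $property => $transformers) {
        $value = null;
        foreach ($transformers as $transformer) {
          $value = $transformer->transform($value, $source);
        }
        $destination->set($property, $value);
      }
      $this->loader->load($destination);
    }
  }
}

A configured KeyMap, when present, would additionally record each source record's unique key against the destination key produced by the loader (step 6), which is what makes rollback and preserved key relationships possible.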

To try out a couple of working demos, git clone git@gitlab.com:soongetl/poc.git and follow the README.

Initial technical priorities
  1. One of those infamous hard problems in computer science is naming things. Before we go too far, let's figure out how best to name things - I think Extractor/Transformer/Loader are pretty solid, but let's discuss whether other components (like Task) could use better names. Also, let's decide what naming conventions for implementations should look like - e.g., should CSV extractor and loader classes both be named CSV (or for that matter, Csv) with namespaces alone distinguishing them, or should they be CSVExtractor and CSVLoader?
  2. The initial architecture, as I've said before, comes from my narrow experience in Drupal. I'm sure there are plenty of other good migration ideas out there - maybe there's even a package I've missed that's good enough that this effort would be better directed towards improving it rather than starting from scratch. I did some research last year and did not find any PHP ETL packages that appeared to have wide adoption or as much flexibility, but with more eyes on it (eyes that have seen more beyond Drupal than mine have), let's see if we can do a thorough review of prior art and find good ideas which may influence this effort. And let's look beyond PHP as well - are there ETL frameworks written in other object-oriented languages which may provide architectural inspiration?
  3. Review the boilerplate for Soong code repos (based on https://github.com/thephpleague/skeleton) - let's go over what we've got there (especially the code of conduct and contributing guidelines).
  4. Test all the things! Before adding new stuff, we need to add tests for the existing components, and set up automated testing on GitLab.
Technical goals
  1. For V1, require PHP 7.1 and leverage strict type checking. I expect future versions to require PHP 7.4 and leverage typed object properties.
  2. The central interface package soong/soong ideally should not depend on anything other than PSR interfaces. It should be approached as if it were a PSR itself - a completely general interface for ETL functionality not dependent on any non-standard interfaces.
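
As a flavour of what that second goal implies, the central interfaces would combine strict typing with PSR-only dependencies. A hypothetical sketch (the namespace and method names here are mine, not soong/soong's):

<?php
declare(strict_types=1);

namespace Soong\Contracts;

// A PSR interface is the only permissible non-PHP dependency.
use Psr\Log\LoggerInterface;

interface Task {
  // Run the ETL operation defined by this task's configuration.
  public function migrate(): void;

  // PSR-3 keeps logging pluggable without coupling to any framework.
  public function setLogger(LoggerInterface $logger): void;
}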
Conclusion

Again, I know I am getting way ahead of myself here by imagining an active open-source community will quickly spring up here. I have talked to Drupal people about my ideas on occasion, and I expect there will be some interest there, but I very much hope other open-source developers can join this effort and provide different perspectives. I do believe strongly that a standard ETL library with a core of simple standard interfaces (making a simple move-my-stuff-from-here-to-there application a breeze) plus the flexibility to build complex systems to handle many types of data will be extremely valuable across many domains.

If I may try your patience a bit longer - I've spent a substantial portion of my time since my last contract pulling these thoughts together, and I am now in need of paid work (contact me if you need some data migration done!). I may fantasize about being sponsored to work fulltime on Soong, or hope that someone has a project they think will benefit from Soong, letting me make progress here in the course of solving their migration problem. Realistically, my next contract (or employment) most likely will not involve Soong development, so once I'm working I won't have as much time to manage this project - let's hope plenty of people join in to pick up my slack!

If you've made it this far, thank you for your time and I look forward to your merge requests!

Tags: Drupal, Planet Drupal, PHP, Migration

Use the Twitter thread below to comment on this post:

https://t.co/0uVJ13t8md

— Virtuoso Performance (@VirtPerformance) January 22, 2019
Categories: Drupal

Manifesto: Grow as a Drupal developer: embrace the community!

22 January 2019 - 6:24am

So far in our series of posts about helping you become a better Drupal developer, we've talked a lot about contribution from an individual point of view. But Drupal is community first! Even if you believe you've already reached your potential as a developer, remember that people before you said, and subscribed to, the motto: Come… Continue reading...

The post Grow as a Drupal developer: embrace the community! appeared first on Manifesto.

Categories: Drupal

ComputerMinds.co.uk: Review driven development?

22 January 2019 - 3:06am

We've heard of test-driven development, behaviour-driven development, feature-driven development and someone has probably invented buzzword-driven development by now. Here's my own new buzzword phrase: review-driven development. At ComputerMinds, we aim to put our work through peer reviews to ensure quality and to share knowledge around the team. Chris has recently written about why and how we review our work. We took some time on our last team 'CMDay' to discuss how we could make doing peer reviews better. We found ourselves answering this question:

Why is reviewing hard? How can we make it easier?

We had recently run into a couple of specific scenarios that had triggered our discussion. For one, pressure to complete the work had meant reviews were rushed or incomplete. The other scenario had involved such a large set of code changes that reviewing them all at the end was almost impossible. I'm glad of the opportunity to reflect on our practice. Here are some of the points we have come away with - please add your own thoughts in the comments section below.

1. Coders, help your reviewers

The person who does the development work is the ideal person to make a review easy. The description field of a pull request can be used to write a summary of changes and to show where the reviewer should start. They can provide links back to the ticket(s) in the project's issue tracking system (e.g. Redmine/Jira), and maybe copy across any relevant acceptance criteria. The coder can chase a colleague to get a review, and then chase them up to continue discussions, as it is inevitable that reviewers will have questions.

2. Small reviews are easier

Complicated changes may be just as daunting to review as to build. So break them up into smaller chunks that can be reviewed more easily. This has the massive benefit of forcing a developer to really understand what they're doing. A divide & conquer approach can make for a better implementation and is often easier to maintain too, so the benefits aren't only felt by reviewers.

3. Review early & often

Some changes can get pretty big over time. They may not be easy to break up into separate chunks, but work on them could well be broken up into iterations, building on top of each other. Early iterations may be full of holes or @TODO comments, but they still reveal much about the developer's intentions & understanding. So the review process can start as early as the planning stage, even when there's no code to actually review. Then as the changes to code take shape, the developer can continually return to the same reviewer. Their contextual knowledge grows along with the changes, helping them understand what's going on and provide a better review.

4. Anyone can review

Inevitably some colleagues are more experienced than others - but we believe reviews are best shared around. Whether talking about your own code, or understanding someone else's code, experience is spread across the team. Fresh eyes are sometimes all that's needed to spot issues. Other times, it's merely the act of putting your own work up for scrutiny that forces you to get things right.

5. Reviewers, be proactive!

Developers like to be working, not waiting for feedback. Once they've got someone to agree to review their work, they have probably moved on to solving their next problem. However well they may have written up their work, it's best for the reviewer to chase the developer and talk through the work, ideally face-to-face. Even if the reviewer then goes away to test the changes, or there's another delay, it's best for the reviewer to be as proactive as possible. Clarify as much as needed. Chase down the answers. Ask seemingly dumb questions. Especially if you trust the developer - that probably means you can learn something from them too!

6. Use the tools well

Some code changes can be ignored or skipped through easily - things like the boilerplate code around features exports in Drupal 7, or changes to composer.lock files. Pointers from the developer to the reviewer about which files/changes are important are really helpful. Reviewers themselves can also get wise as to what to focus on. Tools can help with this - hiding whitespace changes in diffs, the files tab of PRs on GitHub, or three-way merge tools, for example. Screenshots or videos are essential for communicating between developer & reviewer about visual elements when they can't meet face-to-face.

7. What can we learn from drupal.org?

The patch-based workflow that we are forced to use on drupal.org doesn't get a lot of good press. (I'm super excited for the GitLab integration that will change this!) But it has stood the test of time. There are lessons we can draw from our time spent in its issue queues and contributing patches to core and contrib projects. For example, patches often go through two types of review, which I'd call 'focussed nitpicks' and the wider 'approach critiques'. It can be too tempting to write code to only fulfil precise acceptance criteria, or to pass tests - but reviewers are humans, each with their own perspectives to anticipate. Aiming for helpful reviews can be even more useful for all involved in the long run than merely aiming to resolve a ticket.

8. Enforcing reviews

We tailor our workflow for each client and project - different amounts of testing, project management and process are appropriate for each one. So 'review-driven development' isn't a strict policy to be enforced, but a way of thinking about our work. When it is helpful, we use Github's functionality to protect branches and require reviews or merges via pull requests. This helps us to transparently deliver quality code. We also find this workflow particularly handy because we can broadcast notifications in Slack of new pull requests or merges that will trigger automatic deployments.

What holds you back from doing reviews? What makes a review easier?

I've only touched on some of the things we've discussed and there's bound to be even more that we haven't thought of. Let us know what you do to improve peer reviewing in the comments!

Categories: Drupal

Agiledrop.com Blog: Interview with Gabriele Maira of Manifesto Digital

21 January 2019 - 11:46pm

Meet Gabriele Maira, also known as Gabi by friends and as Gambry by the Drupal community. With over 15 years of experience working with PHP and over 10 working with Drupal, Gabriele is currently the PHP/Drupal Practice lead at the London-based Manifesto Digital. Read about his beginnings with open source and why he thinks every Drupal developer should attend a Sprint at least once in their life.

READ MORE
Categories: Drupal

OpenSense Labs: Extract advanced Solr features with Drupal

21 January 2019 - 11:05pm

Instances of advanced search can be seen in many fields. Take space research, for instance. In their perpetual effort to explore the universe and search for Earth-like planets or a star similar to our Sun, scientists wind up discovering interesting things. NASA's Hubble Space Telescope was used to spot a star called Icarus, named after the Greek mythological figure, which is the most distant star ever viewed and is located halfway across the universe.


On the other side of the spectrum, Apache Solr is making huge strides with its advanced search capabilities. Enterprise-level search is a quintessential necessity for your online presence to thrive in this digital age. Big organisations like Apple, Bloomberg, Marketo and Roku, among others, are opting for Apache Solr for its advanced search features. The combination of Drupal and Apache Solr can be a remarkable solution for a magnificent digital presence.

Planting the seed in 2004

John Thuma, in one of his blog posts, stated that tracing the history of Apache Solr takes us back to 2004, when it was an in-house project at CNET Networks used to offer search functionality for the company's website. CNET then donated it to the Apache Software Foundation in 2006.

Later, the Apache Lucene and Solr projects merged in 2010, with Solr becoming a sub-project of Lucene. It has witnessed an awful lot of alterations since then and is now a very significant component in the market.

Uncloaking Apache Solr


An enterprise-capable, open source search platform, Apache Solr is based on the Apache Lucene search library and is one of the most widely deployed search platforms in the world.

Solr is a standalone enterprise search server with a REST-like API. You put documents in it (called "indexing") via JSON, XML, CSV or binary over HTTP. You query it via HTTP GET and receive JSON, XML, CSV or binary results. - Lucene.apache.org

It is written in Java and offers both a RESTful XML interface and a JSON API that enable the development of search applications. Its continuous development by an enormous community of open-source committers under the direction of the Apache Software Foundation has been a great boost.
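
Because the interface is plain HTTP, any language can talk to it. Here is a minimal PHP sketch querying Solr's standard select handler; the host, port and core name ("products") are assumptions for illustration:

<?php
// Query a local Solr core over its HTTP API.
$params = http_build_query([
  'q'    => 'title:drupal', // full-text query
  'wt'   => 'json',         // ask for a JSON response
  'rows' => 10,             // limit the result set
]);

$json = file_get_contents('http://localhost:8983/solr/products/select?' . $params);
$results = json_decode($json, true);

// numFound reports total matches; docs holds the returned page of results.
echo $results['response']['numFound'] . " documents found\n";
foreach ($results['response']['docs'] as $doc) {
  print_r($doc);
}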

Apache Solr is often debated alongside Elasticsearch. There is even a dedicated website called solr-vs-elasticsearch that compares the two on various parameters. It notes that both solutions support integration with open source content management systems like Drupal. Which one to select depends upon your organisation's needs.

For instance, if your team comprises plenty of Java programmers, or you already use ZooKeeper and Java in your stack, you can opt for Apache Solr. On the contrary, if your team consists of PHP/Ruby/Python/full stack programmers, or you already use the Kibana/ELK stack (Elasticsearch, Logstash, Kibana) for handling logs, you can choose Elasticsearch.

Characteristics of Apache Solr

Following are the features of Apache Solr:

Advanced search capabilities
  • Spectacular matching capabilities: Apache Solr is characterised by advanced full-text search capabilities. It enables spectacular matching, comprising phrases, grouping, wildcards, joins and so on, across any data type.
  • A wide array of faceting algorithms: It supports faceted search and filtering, which enables you to slice and dice your data as needed.
  • Location-based search: It offers out-of-the-box geospatial search functionality.
  • Multi-tenant architecture: It offers multiple search indices, which streamlines the process of segregating content and users.
  • Suggestions while querying: There is support for auto-complete while searching (typeahead search), spell checking and many more.
Scalability

It is optimised for colossal spikes in traffic. Also, Solr relies on Apache ZooKeeper (in SolrCloud mode) for cluster coordination, which makes it easy to scale up or down. It has built-in support for replication, distribution, rebalancing and fault tolerance.

Support for standards-based open interfaces and data formats

It uses standards-based open interfaces like XML, JSON and HTTP. Furthermore, you do not have to waste time converting all your data to a common representation, as Solr supports JSON, CSV, XML and many more formats out of the box.

Responsive admin UI

It ships with an out-of-the-box admin user interface that makes it easier to administer your Solr instances.

Streamlined monitoring

Solr publishes a truckload of metric data via JMX that assists you in getting more insight into your instances. Moreover, logging is monitorable, as the log files can be easily accessed from the admin interface.

Magnificent extensions

It has an extensible plugin architecture that makes it simple to plug in both index-time and query-time plugins. It also provides optional plugins for indexing rich content, detecting language, clustering search results, among others.

Configuration management

Its flexibility and adaptability for easy configuration are top-notch. It also offers advanced configurable text analysis: there is support for most of the widely spoken languages in the world, plus a plethora of analysis tools that make indexing and querying your content flexible.

Performance optimisation

It has been tuned to handle the largest of sites, and its out-of-the-box caches have fine-grained controls that assist in optimising performance.

Amazing Indexing capabilities

Solr leverages Lucene's near-real-time indexing capabilities to ensure that users see content whenever they want to. Also, its built-in Apache Tika integration simplifies the process of indexing rich content like Microsoft Word and Adobe PDF files.

Schema management

You can leverage Solr's data-driven schemaless mode in the early stages of development and lock it down for production.

Security

Solr has robust built-in security features like SSL (Secure Sockets Layer), authentication and role-based authorisation.

Storage

Lucene's advanced storage options (codecs, directories, among others) ensure that you can fine-tune data storage to suit your application.

Leverage Apache UIMA

Content can be enhanced with its advanced annotation engines. It incorporates Apache UIMA to bring NLP (Natural Language Processing) and other tools to your application.

Integrating Apache Solr with Drupal

Drupal's impressive flexibility empowers digital innovation and gives users the power to build almost anything, including integrating your website with the Solr platform. Drupal's Search API Solr Search module provides a Solr backend for the Drupal Search API module.

To begin with, you need Apache Solr installed on your server. Then validate the Solr server's status from the terminal, and install the Search API Solr Search module using Composer.

Once the Search API Solr Search module is installed, the configuration of Solr ensues. This involves the creation of a collection, which is basically a logical index linked to a config set.

Source: OSTraining

Then, Drupal's default search module is uninstalled to avoid any performance issues, and the Search API Solr Search module is enabled. You can then move on to configuring the Search API. Finally, you can test the Search API Solr Search module.

Source: OSTraining

Case study

The Rainforest Alliance (RA), an international non-profit organisation working towards strong forests, healthy agricultural landscapes, and thriving communities via creative collaboration, leveraged the power of Drupal to revamp its website with the help of a digital agency.


RA has built a repository of structured content to support its mission; the content is primarily exhibited as long-form text with a huge variety of metadata and assets associated with each piece of content. It wanted to revamp the site and enable the discovery of new content through the automatic selection of related content. It also required advanced permission features and publishing workflows.

Drupal turned out to be an astounding choice for fulfilling RA's requirement of portable and searchable content. It was also great because of its deep integrations with Apache Solr, which enabled a nuanced content-relation engine. Solr was leveraged to power various search interfaces. Furthermore, Drupal's wonderful content workflow features made it a perfect choice.

Solr offered 'more like this' (MLT) functionality that was more robust than just tagging content and showing other content with the same taxonomy terms. The Search API Solr Search module, which provides a Solr backend for the Search API module, was utilised to provide the interface for governing the servers and indexes. Then, with a custom block, MLT was leveraged to assist in generating related-content lists.

The Page Manager module, in combination with the Layout Plugin and Panels modules, was used to build specialised landing pages, many of them with their own layouts. Different modules from within Drupal's media ecosystem were very beneficial in administering images, embedding videos, and so on. Entity Embed, Entity Browser and Inline Entity Form made for a great editorial experience for content teams.

Conclusion

Apache Solr is a great solution for enabling enterprise-level search and can make a world of difference in combination with Drupal for your digital presence.

We have been empowering our partners in their digital transformation efforts with our expertise in Drupal development.

Ping us at hello@opensenselabs.com to extract advanced Solr features with Drupal.

Tags: Apache Solr, Drupal, Drupal 8, Elasticsearch, Search API, Search API Solr Search
Categories: Drupal

OpenSense Labs: Magnificent combo: Implementing Elasticsearch with Drupal

21 January 2019 - 9:28pm

Elasticsearch has proved meritorious for The Guardian, one of the most reputable news organisations, by giving them the freedom to build a stupendous analytics system in-house rather than depending on a generic, off-the-shelf analytics solution. Their traditional analytics package was horrendous: extremely sluggish and consuming an enormous amount of time. The Elasticsearch-powered solution has turned out to be an enterprise-wide analytics tool and helped them understand how their content is being consumed.


Why is such a large organisation as The Guardian choosing Elasticsearch for its business workflow? Elasticsearch is all about full-text search, structured search, analytics, the intricacies of confronting human language, geolocation and relationships. Drupal, one of the leading content management frameworks, is a magnificent solution for empowering digital innovation and can help in implementing an Elasticsearch ecosystem. Before we look at Drupal's capabilities there, let's unwrap Elasticsearch first.

Unlocking Elasticsearch

Elasticsearch is an open source, broadly distributable, RESTful search and analytics engine which is built on Apache Lucene. It can be accessed through an extensive and elaborate API. It enables incredibly fast searches for supporting your data discovery applications. It is used for log analytics, full-text search, security intelligence, business analytics and operational applications.

Elasticsearch is a distributed, scalable, real-time search and analytics engine - elastic.co

It enables you to store, search and assess the voluminous amount of data swiftly and in near real-time. In general, it is leveraged as the underlying engine/technology for powering applications that have sophisticated search features and requirements.

How does Elasticsearch work? With the help of the API or ingestion tools like Logstash, data is sent to Elasticsearch in the form of JSON documents. The original document is automatically stored by Elasticsearch, and a searchable reference is added to the document in the cluster's index. The Elasticsearch API can then be utilised to search and retrieve the document. Kibana, an open-source visualisation tool, can also be leveraged with Elasticsearch for visualising the data and creating interactive dashboards.
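
For instance, indexing and then searching a document with the official PHP client (the elasticsearch/elasticsearch Composer package) looks roughly like this; the index and field names are illustrative:

<?php
require 'vendor/autoload.php';

use Elasticsearch\ClientBuilder;

// Connect to a local Elasticsearch node; adjust hosts for your cluster.
$client = ClientBuilder::create()
  ->setHosts(['localhost:9200'])
  ->build();

// Send a JSON document: Elasticsearch stores the original and indexes it.
$client->index([
  'index' => 'articles',
  'type'  => '_doc', // required on 6.x-era clients
  'id'    => '1',
  'body'  => ['title' => 'Elasticsearch with Drupal', 'body' => 'Some text.'],
]);

// Retrieve it in near real-time with a full-text match query.
$results = $client->search([
  'index' => 'articles',
  'body'  => ['query' => ['match' => ['title' => 'drupal']]],
]);

foreach ($results['hits']['hits'] as $hit) {
  echo $hit['_source']['title'], PHP_EOL;
}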

Elasticsearch is often debated alongside Apache Solr. There is even a dedicated website called solr-vs-elasticsearch that compares the two on various metrics. Both solutions support integration with open source content management systems like Drupal. Which one to select depends upon your organisation's needs.

For instance, if your team comprises plenty of Java programmers, or you already use ZooKeeper and Java in your stack, you can opt for Apache Solr. On the contrary, if your team includes PHP/Ruby/Python/full stack programmers, or you already use the Kibana/ELK stack (Elasticsearch, Logstash, Kibana) for handling logs, you can choose Elasticsearch.

Merits of Elasticsearch

Following are some of the benefits of Elasticsearch:

  • Speed: Elasticsearch helps in leveraging and accessing all your data at a fast clip. It also makes it simple to rapidly build applications for multiple use cases.
  • Performance: Being highly distributable, it allows the processing of a colossal amount of data in parallel and swiftly finds the best matches for your search queries.
  • Scalability: Elasticsearch can easily operate at any scale without compromising on power and performance, allowing you to move from prototype to production seamlessly. It scales horizontally to govern multiple events per second while simultaneously handling the distribution of indices and queries across the cluster for efficacious operations.
  • Integration: It comes integrated with the visualisation tool Kibana. It also offers integration with Beats and Logstash for streamlining the process of transforming source data and loading it into an Elasticsearch cluster.
  • Safety: It detects failures to keep the cluster and the data safe and available. With cross-cluster replication, a secondary cluster can be leveraged as a hot backup.
  • Real-time operations: Elasticsearch operations like reading or writing data are usually performed in less than a second.
  • Flexibility: It can pliably handle application search, security analytics, metrics, logging, among others.
  • Simple application development: It offers support for numerous programming languages, including Java, Python, PHP, JavaScript, Node.js and Ruby.
Elasticsearch with Drupal

For designing a full Elasticsearch ecosystem in Drupal, Elasticsearch Connector, a set of modules, can be utilised. It leverages the official Elasticsearch PHP library and was built with the objective of handling large sets of data at scale. It is worth noting that this module is not covered by the security advisory policy.


The Elasticsearch Connector module can be utilised with a Drupal 8 installation and configured so that Elasticsearch receives content changes. At first, you need to download a stable release of Elasticsearch and start it. You can then move ahead and set up Search API. This is followed by connecting Drupal to Elasticsearch with the help of the Elasticsearch Connector module, which involves the creation of a cluster: the collection of node servers where all the data will be stored and indexed.


This is succeeded by the configuration of Search API. It offers an abstraction layer that allows Drupal to push content alterations to different servers such as Elasticsearch, Apache Solr, or any other provider that has a Search API compatible module. Indexes are created on each of those servers with the help of Search API. These indexes are like buckets into which data can be pushed and searched in different ways. Subsequently, the content is indexed and the data processed.

Case Study

The website of Produce Market Guide (PMG), a resource for produce commodity information, fresh trends and data analysis, was rebuilt by OpenSense Labs. Interpolating a JavaScript framework into the Drupal front end using progressively decoupled Drupal helps create a balance between the workflows of developers and content editors.


We rebuilt the website of PMG using progressively decoupled Drupal, React and Elasticsearch Connector module among others. 

To do the mapping and indexing on the Elastic server, the Elasticsearch Connector and Search API modules were leveraged. The development of the Elastic backend architecture was followed by building the faceted search application with React and incorporating the app in Drupal as a block or template page.

The project structure for the search was designed and developed in a sandbox with modern tools like Babel and Webpack and third-party libraries like Searchkit. Searchkit is a suite of React components that communicate directly with your Elasticsearch cluster; every component can be customised as per your needs. Searchkit was of immense help in this project.

Logstash and Kibana, which are based on Elasticsearch, were integrated on the Elastic server. This helped in collecting, parsing, storing and visualising the data. The app in the sandbox was built for production and all the CSS/JS was integrated inside Drupal as a block, thereby making it a progressively decoupled feature.

Following the principles of Agile and Scrum, the project resulted in a user-friendly website for PMG with a search application that loads search results faster.

Conclusion

The world is floating over a cornucopia of data. There is simply no end to the growth in the amount of data that is flowing through and produced by our systems. Existing technology has laid emphasis on how to store and structure warehouses replete with data.

But when it comes to making decisions in real time informed by that data, you need something like Elasticsearch for searching and analysing data in real time. Drupal, with its suite of modules, can be a wonderful solution for implementing an Elasticsearch ecosystem.

We have been steadfast in our goal of empowering digital innovation with our suite of services.

Contact us at hello@opensenselabs.com to reap the rewards of Elasticsearch and ingrain your digital presence with advanced search capabilities.

Tags: Elasticsearch, Drupal, Drupal 8, ReactJS, Apache Solr, Search API, Elasticsearch Connector
Categories: Drupal

Tandem's Drupal Blog: Drupal 6 LTS + PHP 7 + Platform.sh

21 January 2019 - 4:00pm
January 22, 2019

A quick and easy guide to getting Drupal 6 sites on a stable and secure platform

Why are we writing about this? Drupal 6 reached end of life on February 24th, 2016. This means that your Drupal 6 site is no longer receiving security updates, all its modules are now marked unsupported, and all support requests will no longer be r...
Categories: Drupal

Wim Leers: State of JSON:API (January 2019)

21 January 2019 - 8:36am

As promised 3 months ago, Gabe, Mateu and I shipped support for revisions and file uploads today!

What happened since last month? In a nutshell:

JSON:API 2.1

JSON:API 2.1 follows two weeks after 2.0.

Work-arounds for two very common use cases are no longer necessary: decoupled UIs that are capable of previews and image uploads [3].

  • File uploads work similarly to Drupal core’s file uploads in the REST module, with the exception that a simpler developer experience is available when uploading files to an entity that already exists.
  • Revision support is for now limited to retrieving the working copy of an entity using ?resourceVersion=rel:working-copy. This enables the use case we hear about the most: previewing draft Nodes [4]. Browsing all revisions is not yet possible due to missing infrastructure in Drupal core. With this, JSON:API leaps ahead of core’s REST API; a request sketch follows below.
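
As a sketch of what the preview use case looks like from a consumer, here is a Guzzle request for a node's working copy; the base URL and UUID are placeholders, not values from the post:

<?php
require 'vendor/autoload.php';

// Placeholders: point these at your own site and node.
$uuid = '...';
$client = new \GuzzleHttp\Client(['base_uri' => 'https://example.com']);

// ?resourceVersion=rel:working-copy asks JSON:API for the working copy
// (the latest draft revision) instead of the default published one.
$response = $client->get('/jsonapi/node/article/' . $uuid, [
  'query'   => ['resourceVersion' => 'rel:working-copy'],
  'headers' => ['Accept' => 'application/vnd.api+json'],
]);

$draft = json_decode((string) $response->getBody(), true);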

Please share your experience with using the JSON:API module!

  1. Note that usage statistics on drupal.org are an underestimation! Any site can opt out from reporting back, and composer-based installs don’t report back by default. ↩︎

  2. Which we can do thanks to the tightly managed API surface of the JSON:API module. ↩︎

  3. These were in fact the two feature requests with the highest number of followers. ↩︎

  4. Unfortunately only Node and Media entities are supported, since other entity types don’t have standardized revision access control. ↩︎

Categories: Drupal

Lullabot: Behind the Screens with Tom Sliker

21 January 2019 - 12:00am
Tom Sliker started Broadstreet Consulting more than a decade ago, and has made Drupal a family affair. We dragged Tom out of the South Carolina swamps and into DrupalCamp Atlanta to get the scoop. How does Tom service more than 30 clients on a monthly basis with just a staff of five people? His turn-key Aegir platform, that's how! Join the conversation at https://www.lullabot.com/podcasts/drupal-voices/278-tom-sliker
Categories: Drupal

Mark Shropshire: Announcing the 2019 Charlotte Drupal Drive-in!

20 January 2019 - 7:52pm

After three successful Charlotte Drupal Drive-in events in 2014, 2015, and 2018, the Charlotte Drupal User Group (CharDUG) is bringing it back on March 2nd, 2019. The format of the event is unconference style, allowing for a relaxed atmosphere where beginner and seasoned Drupalers alike are able to discuss their projects and ideas and ask questions.

Whether you want to discuss your projects with others, give an impromptu talk, or present a polished slide deck, you will be given the chance to pitch your idea(s). Once the pitches are made, every attendee gets to vote on the ones they find most interesting. This setup keeps the event informal, the schedule fluid, and the topics dynamic. Most of all, we have a lot of fun!

While Drupal is the focus, we will also welcome talks on other topics!

Suggested topic ideas:
  • Drupal 8 experimental modules and future of Drupal 9
  • React, React Native, GatsbyJS, and other JavaScript technologies
  • Decoupled Drupal
  • Contributing to open source
  • SEO and Marketing
  • Development tools (IDEs, text editors, Drush, Composer, terminal, etc.)
  • DevOps, deployments, continuous integration, etc.
  • QA and automated testing
  • Show and tell your web or application project
  • Your idea goes here!

Bring a talk idea or just come and hang out! This year’s event will be at Charlotte, North Carolina’s awesome Hygge Coworking at Camp North End. Thanks to our sponsors, we will have door prizes, beverages, and snacks provided free of charge! Did I tell you the event is free? Yep, it is!

Register for the March 2nd Charlotte Drupal Drive-in today! Hope to see you there!

Thank you to our fine sponsors!

If you want to learn more about the origins of the Charlotte Drupal Drive-in and what others think of past events, check out the links below!

Categories: Drupal

OpenSense Labs: Heading towards change - Upgrading or Upcycling?

20 January 2019 - 7:25pm

You’ve been living in a house for a while now and you are still fond of the place. But it’s no longer exactly what you need, and you are longing for a change.

You think of rearranging and modifying a few things. Maybe paint the room walls in vibrant colors or set up brand new furniture. And voila, it feels like home once again. 

A similar situation applies to websites.

Websites are redesigned to keep up with the online trends. Sometimes it is done merely because the owners learn over time that the site design is not ideal for their purpose.


Coming back to redesigning: sometimes a house gets really old and is on the verge of collapsing. This type of situation demands a move to the next one, exactly the way a website owner must move on upon witnessing the end of a site’s life cycle.

So what are these things? Is there a term for them? Are they processes? And what methods do they involve?

Let’s see what exactly the terms are.

Introducing Upcycling and Upgrading 

It is true that a full rebuild of a website is a time-consuming and expensive task, just as a full rebuild of a house is. In such situations you are left with only two options: either make the necessary modifications, or hop onto a completely new site.

If you choose the option that simply involves refreshing the site’s appearance or modifying some of its parts, then upcycling is the hero for you.

Reusing existing materials to create a product of higher value or quality than the original is termed upcycling.

In technical terms, upcycling is an incremental approach to relaunching an existing website. It is done to keep up with online trends, to pay down technical debt, or because you have a well-established website and no plans to reconstruct it as a whole.

The other option is to bring a website onto cutting-edge technology, done for optimal results and an increase in revenues and profits. In other words, upgrading.

Comparing Upgrading and Upcycling

When to do it?
  Upgrading: when you start witnessing the end of a life cycle.
  Upcycling: at any stage of an existing project.

Why to do it?
  Upgrading: because the website contains bugs and is prone to hackers.
  Upcycling: to give the website a new appearance and improve the user experience.

Return on investment
  Upgrading: you see your investment returned as quickly as possible.
  Upcycling: the ongoing maintenance and upkeep are where you start to see your investment.

Performance
  Upgrading: performance measurement takes several months and uses a range of metrics.
  Upcycling: measured with the help of conversion rate and other metrics.

How to do it?
  Upgrading: add more content, create multiple landing pages, go mobile friendly.
  Upcycling: innovate and bring newer design versions for the relaunch of the whole site.

When to do it?

Upcycling

If you are low on cash but have time and some creativity, you can reclaim almost anything and repurpose it. In other words, any website can be repurposed or redesigned at any stage of an existing project, on the existing website infrastructure.

If you have a well-established web system that has been operational for several years and you don’t want to spend the time and money on a full rebuild, upcycling is for you. It brings improvements in editorial experience, user experience, the front end, etc.

Upcycling is also adopted for investment reasons: implementation is quick and easy.

Upgrading 

Computers and other electronics cannot be upcycled once they reach the end of their life cycle; they must be upgraded to keep up with technological advancements. The same goes for websites. Websites are upgraded when they lack functionality and speed, when search engines fail to detect them, or when they fail to leave a lasting impression. Upgrading a website is like a do-over of your brand.

Upgrading to a brand new website consumes an enormous amount of time and leaves an organization with an immense cost.

The reason it faces such issues is evident: it starts from scratch, and therefore requires deep research and analysis.

Why to do it?

Upcycling

Website owners have plenty of reasons to transform their websites. They might want a fresh appearance, need to pay down technical debt, or simply want to improve the user experience.

Upcycling is the answer for all.

Though upcycling might de-prioritize the motto of a “Big Bang launch”, it meets one primary goal: reducing time to market for big website improvements.

Upgrading

Is your website prone to hackers and bugs? Is it preventing visitors from finding their way to your content? Then it is time for you to upgrade it. Upgrading your website involves four main factors: design, marketing, usability and time.

Design: The appearance of your website reflects the business message you are trying to portray. Design covers things like mobile responsiveness, browser compatibility and image resolution.

Marketing: Once the website design is done and the content is established, it is time to promote it. Marketing involves SEO updates, measurement of effectiveness and a call to action (a button or link placed on a website to drive prospective customers to become leads by completing an action on your landing page).

Usability: This term describes the ease with which a particular website or project can be used. Usability helps in decreasing bounce rates and contributes to performance.

Time: A fresh website is a perfect way to save the admin’s time, which directly benefits customer service as well.

Return on Investment (ROI)

Upcycling 

ROI, or return on investment, in upcycling depends on two factors:

  • The cost and
  • The results of a website. 

Cost is the price of the website, which varies widely depending upon the budget and funds of the organization. Generally, the price of a website relates to the total time taken to create it. Factors such as project management, design, programming, team, content and CMS determine the cost of a website.

The result is an equally important factor in the whole ROI journey. A website’s ROI may literally be negative if the site doesn’t produce the desired results, turning the cost into a loss. Factors affecting ROI include the cost of creating and maintaining the website, traffic, conversion rate, website lifespan, etc.

When both of these, cost and result, are done right, the site can generate so much demand that you witness a steep increase in website conversions. The ongoing maintenance and upkeep are where you start to see your return on investment.

Upgrading

Once you have taken the leap and invested in the reconstruction of your organization’s website, it’s time to begin calculating ROI. While it is difficult to predict the precise ROI of a brand new website, it will definitely result in a host of benefits.

Those benefits might include an uptick in search engine ranking and a steady generation of interest from potential buyers and customers. Though there is no cut-and-dried method for measuring ROI, there are different ways to track success and determine whether you are meeting your goals.

Tracking web activity, monitoring search engines and calculating costs are some of the ways to do it.

Performance 

Upcycling

Website owners who walk down the path of redesigning a website would agree that there is a single most important metric determining the success of a website: the conversion rate.

A conversion rate is the percentage of visitors to a website who complete the desired goal, out of the total number of visitors.
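
For example, if a site receives 2,000 visitors in a month and 50 of them complete the desired goal, the conversion rate is 50 ÷ 2,000 = 2.5%.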

A high conversion rate is an indication of successful marketing and web design. It means people want what you are offering and that they are able to understand it. The whole point of redesigning a website is to improve its performance.

Apart from conversion rate, there are certain other metrics which, when optimized well, lead to better conversions and site performance. Metrics like:

  • Number of sessions per user
  • Pages per session
  • Lower bounce rate
  • Clicks around your website

Upgrading

Let’s assume that your organization has a great, unique idea, and you have also proved that demand for the must-have “new product” you are selling is going to rock the market. You launch a beautiful website, and you wait for the performance graph to rise.

Crickets. No traffic.

No traffic means no money, so how do you mend it? Three factors are key to improving it.

Backlinks: Backlinks are incoming hyperlinks from other websites that pass link equity to yours, and they are among the most significant factors determining a site's search ranking. The link-building strategy should be started as early as possible, because it can take months for Google to update a ranking.

Content is king: Google wants to see that your website is dynamic and active in order for it to rank. Having a blog section on your website does the job, but you have to publish content frequently to give your customers a reason to return.

Keywords: New websites have a hard time competing for top keywords, so it is best to start with long-tail keywords. Well-chosen keywords attract a targeted audience looking for exactly what you are selling.

How to Do It Using Drupal?

Upcycling

The Auraria Library main website is a great example of how upcycling can be achieved with the help of Drupal. The website was powered by Drupal 6, which was approaching its end of life, so an infrastructure upgrade was the highest priority. In 2016 a project team was established to kick off the journey of revamping the website. Six months later the website was relaunched on a cloud-based, mobile-friendly, and localization-ready content management system: Drupal 8.

Drupal 8 was chosen because the organization valued its scalability, flexibility, and rapid availability, backed by impressive community-driven support. So when it came time to select a CMS platform, Drupal was the natural choice.

Thus, with longevity in mind, a direct move from Drupal 6 to Drupal 8 (skipping Drupal 7) was made.

Because the migration was urgent, the project team applied the 80/20 rule to break the development objective into multiple releases. This not only enabled a short first development cycle and a launch within six months, but also bought them time to wait for key contributed D8 modules to mature.

Improved user experience, increased ROI, responsive design, brand engagement, infrastructure improvement, and on-time delivery were the key objectives of the project.

Upgrading 

Upgrading a website to Drupal 8 involves several steps. First, define the source site (make sure the core Migrate, Migrate Drupal, and Migrate Drupal UI modules are enabled), then review the pre-upgrade analysis. The upgrade from Drupal 7 should be run against an empty Drupal 8 site. If conflicting IDs are detected, a warning will be shown.

This warning can be dealt with in two ways: ignore it and risk losing data, or abort and take an alternative approach.

Then run the upgrade itself; how long it takes depends on the size and type of the content. A sketch of the command-line route follows.
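
As a rough sketch (this assumes Drush plus the contributed Migrate Upgrade module; exact command names vary by Drush version, and the database URL and site root below are hypothetical placeholders):

# Enable the core migration modules.
drush en migrate migrate_drupal migrate_drupal_ui -y

# Run the upgrade from the command line.
drush migrate-upgrade --legacy-db-url=mysql://user:password@localhost/drupal7 --legacy-root=https://example.com

Alternatively, the same upgrade can be run through the browser by visiting /upgrade on the Drupal 8 site.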

Why Choose Upcycling Over Upgrading?

Now that you have read all the points and reached this section, you probably have an idea forming about what is best for your website.

To give you a clearer picture, let's put it in simpler, more accurate words.

To upgrade is to break something down and create a new product from the base material. This might sound great for consumer goods, but it is not always great for a website. Upgrading is a good choice for organizations that have the time and money to invest in a full rebuild, or whose site is nearing its end. For well-established websites, it is not a good choice.

The goal of upcycling is to add value without degrading the product: creating something new out of something old with as little effort as possible. Upcycling is the right choice for Drupal-based websites because it uses the features of Drupal 8 in a way that increases the website's conversions.

Upcycling and Decoupling 

If a website has complex backend logic that the owner doesn't wish to rebuild but is eager to relaunch the frontend, upcycling is the solution: relaunching the frontend as a decoupled site with Drupal themes and then integrating it with the existing backend makes more sense.

Conversely, if the backend needs a major push but you want to keep the existing frontend without reconstructing it, upcycling can work for that too once the backend is decoupled.

Decoupling your entire architecture enables you to upcycle individual parts of the website and bring great value to end users. It contributes strongly to things like:

Infrastructure: Gets the most out of the existing content and website composition. 

User Experience: Improves user experience, design, and the front end without waiting for a full relaunch.

Investment: You see a return on your investment as soon as possible.

In a Nutshell

At the end of the day, we all want our websites to flourish in terms of performance and to keep improving until they reach the desired benchmarks.

Upcycling can be a blessing for website owners. At Opensense Labs, we help our clients with their website transitions. Our services not only provide splendid modifications but also take care of all your needs and demands. Ping us at hello@opensenselabs.com.

Drupal Drupal 8 Upgrading Upcycling Return on investment Performance Decoupled Drupal Website redesign
Categories: Drupal

Spinning Code: Drupal 8 Batch Services

20 January 2019 - 1:04pm

For this month’s South Carolina Drupal User Group I gave a talk about creating batch services in Drupal 8. As a quick side note, we are trying to include video conference access for all our meetings, so please feel free to join us even if you cannot come in person.

Since Drupal 8 was first released I have been frustrated by the fact that Drupal 8 batch jobs were basically untouched from previous versions. There is nothing strictly wrong with that approach, but it has never felt right to me, particularly when doing things in a batch job that I might also want to do in another context – that work really belongs in a service, and I should write those core jobs first. After several frustrating experiences trying to find a solution I like, I finally created a module that provides an abstract class that can be used to create a service that handles this problem more elegantly. The project also includes an example module to provide a sample service.

Some of the text in the slides got cut off by the Zoom video window, so I uploaded them to SlideShare as well:


Quick Batch Overview

If you are new to Drupal batches there are lots of articles around that go into details of traditional implementations, so this will be a super quick overview.

To define a batch you generate an array in a particular format – typically as part of a form submit process – and pass that array to batch_set(). The array defines some basic messages, a list of operations, a function to call when the batch is finished, and optionally a few other details. The minimal array would be something like:

<?php

// Set up the final batch array.
$batch = [
  'title' => 'Page title',
  'init_message' => 'Opening message',
  'operations' => [],
  'finished' => '\some\class\namespace\and\name::finishedBatch',
];

The interesting part should be in that operations array, which is a list of tasks to be run; but getting all your functions set up and the batch array generated can often be its own project.

Each operation is a pairing of a function that implements callback_batch_operation() and the data to feed that function. The callbacks are just functions whose final parameter is an array reference, typically called $context. The function can either perform all the needed work on the provided parameters, or perform part of that work and update the $context['finished'] value to a number between 0 and 1. Once finished reaches 1 (or isn't set at the end of the function) batch declares that task complete and moves on to the next one in the queue. Once all tasks are complete it calls the function provided as the finished value of the array that defined the batch.
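
A minimal sketch of such a callback (the function and variable names here are hypothetical, not from the module discussed below):

<?php

/**
 * Implements callback_batch_operation().
 *
 * Processes a list of node IDs in chunks of ten per pass.
 */
function example_process_nodes(array $nids, &$context) {
  if (!isset($context['sandbox']['progress'])) {
    $context['sandbox']['progress'] = 0;
    $context['sandbox']['max'] = count($nids);
  }

  $chunk = array_slice($nids, $context['sandbox']['progress'], 10);
  foreach ($chunk as $nid) {
    // The real work for each item goes here.
    $context['results'][] = $nid;
    $context['sandbox']['progress']++;
  }

  // Batch re-runs this callback until finished reaches 1.
  $context['finished'] = $context['sandbox']['max'] > 0
    ? $context['sandbox']['progress'] / $context['sandbox']['max']
    : 1;
}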

The finish function implements callback_batch_finished(), which means it accepts three parameters: $success, $results, and $operations. $success is TRUE when all tasks completed without error; $results is an array of data you can feed into the $context array during processing; $operations is your operations list again.
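
A minimal sketch of a finish callback (again, the names are hypothetical):

<?php

/**
 * Implements callback_batch_finished().
 */
function example_finished_batch($success, array $results, array $operations) {
  if ($success) {
    drupal_set_message(t('Processed @count items.', ['@count' => count($results)]));
  }
  else {
    drupal_set_message(t('The batch ended with an error.'), 'error');
  }
}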

Those functions are all expected to be static methods on classes or, more commonly, functions defined in a procedural code block imported from a separate file (which can be specified in the batch array).

My replacement batch service

It’s those blocks of procedural code and classes of nothing but static methods that bug me so much. Admittedly the batch system is convenient and works well enough to handle major tasks for lots of modules. But in Drupal 8 we have a whole suite of services and plugins that are designed to be run in specific contexts that batch does not provide by default. While we can access the Drupal service container and get the objects we need, the batch code always feels clunky and out of place within a well-structured module or project. What’s more, I have often created batches that benefit from having the key tasks be functions of a service, not just specific to the batch process.

So after several attempts to force batches and services to play nice together, I finally created this module to force a marriage. There are places which required a bit of compromise, but I think I have most of that contained in the abstract class so I don’t have to worry about it on a regular basis. That makes my final code, with its complex logic and processing, far cleaner and easier to maintain.

The Batch Service Interface module provides an interface and an abstract class that implements parts of it: abstract class AbstractBatchService implements BatchServiceInterface. The developer extending that class only needs to define a service that handles generating a list of operations that call local methods of the service, plus the finish batch function (also a local method). Nearly everything else is handled by the parent class.

The implementation I provided in the example submodule ends up being four simple methods. Even in more complex jobs, all the real work can be contained in methods that are isolated from the oddities of batch processing.

<?php

namespace Drupal\batch_example;

use Drupal\node\Entity\Node;
use Drupal\batch_service_interface\AbstractBatchService;

/**
 * Class ExampleBatchService logs the name of nodes with id provided on form.
 */
class ExampleBatchService extends AbstractBatchService {

  /**
   * Must be set in child classes to be the service name so the service can
   * bootstrap itself.
   *
   * @var string
   */
  protected static $serviceName = 'batch_example.example_batch';

  /**
   * Data from the form as needed.
   */
  public function generateBatchJob($data) {
    $ops = [];
    for ($i = 0; $i < $data['message_count']; $i++) {
      $ops[] = [
        'logMessage' => ['MessageIndex' => $i + 1],
      ];
    }

    return $this->prepBatchArray($this->t('Logging Messages'), $this->t('Starting Batch Processing'), $ops);
  }

  public function logMessage($data, &$context) {
    $this->logger->info($this->getRandomMessage());

    if (!isset($context['results']['message_count'])) {
      $context['results']['message_count'] = 0;
    }
    $context['results']['message_count']++;
  }

  public function doFinishBatch($success, $results, $operations) {
    drupal_set_message($this->t('Logged %count quotes', ['%count' => $results['message_count']]));
  }

  public function getRandomMessage() {
    $messages = [
      // List of messages to select from.
    ];

    return $messages[array_rand($messages)];
  }

}
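
Wiring this up from a form submit handler would presumably look something like the following (a sketch only; the exact entry point depends on the module's API):

<?php

// Hypothetical usage: fetch the service, build the batch, and run it.
$service = \Drupal::service('batch_example.example_batch');
$batch = $service->generateBatchJob($form_state->getValues());
batch_set($batch);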

There is the oddity that you have to tell the service its own name so it can bootstrap itself. If there is a way around that, I’d love to know it. But really it is just one line of code that’s a bit strange; everything else is now a fairly clear call and response.

One of the nice upsides to this solution is that you can write tests for the service that look and feel just like any other service’s tests. The methods can all be called directly, and you are not trying to run tests against a procedural code block or a class that is nothing but static methods.

I would love to hear ideas about ways I could make this solution stronger. So please drop me a comment or send me a patch.

Related core efforts

There is an effort to do similar things in core, but it looks like it has some distance left to travel. Once that work is complete it is likely to be better than what I have created, but in the meantime my service allows for a new level of abstraction without waiting for core’s updates to be complete.

Categories: Drupal

OSTraining: How to Create a Voting System in Drupal 8

19 January 2019 - 10:40pm

One of OSTraining’s customers asked how to implement content voting on their site. 

In Drupal 7, the popular choice was the Fivestar module, but that's still not ready for Drupal 8.

In this tutorial, I'll show you how to use the Votingapi Widgets module in Drupal 8. This module makes use of a “Rating” field, which you can customize and insert into your content.

Categories: Drupal

OSTraining: How to Create CSS and Javascript Animations in Drupal 8

19 January 2019 - 10:14pm

There are multiple JavaScript and CSS libraries on the internet. They allow you to animate parts of your site and make them look more attractive.

The Animations module makes it easy to include animations on your Drupal site. 

There's a full list of the animations on the module page, but they include such wonderfully named animations as swing, tada, wobble, jello, bounceIn, bounceOut, fadeIn, rollOut, zoomIn, slideInRight and more.

In this tutorial, I'll show you how to install and use the required libraries and the Animations module.

Categories: Drupal

OSTraining: How to Create a Contact Form in Drupal 8

19 January 2019 - 10:06pm

One of OSTraining’s members asked how to create the following web form with the Webform module:

  • The site has four regions to be contacted: NE, SE, NW, and SW.
  • Each region will have their own contact person/team.
  • The user must be able to select one or more of the regions to contact at the same time, using checkboxes.
  • All the submissions must be BCC-ed to the main administrator.
  • If needed, the site owner should be able to add one or more email recipients for each region.

In this tutorial, you will learn how to meet these user requirements using the Webform module.

Categories: Drupal

OSTraining: Videos to Get you Started with Drupal Development

19 January 2019 - 9:54pm
How do I get started with Drupal development?

That's a common question we get from people who join OSTraining for the first time. They want to know about the skills they will need, and what kind of classes they should take.

In this guide, I'll give you an overview to help you get started with Drupal development.

Categories: Drupal
