Welcome to the next installment of the series of articles for Drupal sysadmins. Today, you are going to learn the process and nuances of setting up Nginx so it works as Apache’s front-end on a Debian server.
In the previous article, we covered setting up a web server on a Debian machine and installing Drupal. The solution offered there has a couple of drawbacks:
The extraordinary scale of the WannaCry ransomware infection has acted as a dramatic warning to organisations in all sectors. With thousands of organisations worldwide – including a significant proportion of the NHS – falling victim to the ransomware, it’s a timely reminder of the importance of robust cybersecurity.
Your organisation’s website is potentially one of the biggest parts of your overall ‘attack surface’, which cybercriminals will probe for a route into your network. As such, it is vital to implement solid tools and processes specifically designed to protect it against attack, and those tools and processes should be tailored to the content management system underpinning your site.
So, if your site is built on Drupal, what are the best practices you should be following?
1. Upgrade to the latest version of Drupal
The WannaCry attack has proliferated so dramatically because it relies on an exploit in an old version of Windows – one that Microsoft is no longer supporting. It is usual commercial practice for vendors and manufacturers to gradually withdraw support from older hardware and software – this is the case with Drupal, as with Microsoft. If you have not yet migrated to the latest version – Drupal 8 – that should be your first priority.
2. Upgrade to the latest version of modules
Drupal is a modular CMS, with thousands of options available to extend your basic system. As such, it is not enough to simply ensure you’re running the latest, best-protected version of Drupal – you need to make sure you’re doing the same with each individual module. The author of each extension is responsible for providing appropriate security upgrades and patches, but these will generally only apply to the latest version of the module. If you’re running an old one, you’re not protected.
3. Remove unnecessary modules
By the same token, running modules on your site that you no longer need is simply increasing your potential attack surface – and your security management burden. Implement a process to ensure that you are continually reviewing all of the modules you have added, and get rid of the surplus.
4. Use the Status Report tool
The Status Report function sits within your Drupal admin area. Its job is to alert you to any issues with the code base underpinning your site, including out-of-date modules and code. It is the easiest way to keep on top of your website management and ensure that you are deploying the latest versions of everything. Don’t forget to enable the core Update Manager module so that you get regular notifications.
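If you manage sites from the command line, much of the same information is available through Drush. A sketch, assuming Drush 8 is installed and run from the Drupal root (exact command names vary between Drush versions):

```shell
# List available updates for core and contrib modules,
# including security releases.
drush pm-updatestatus

# Show the same requirements checks as the Status Report page.
drush core-requirements
```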
5. Practice strong user management
As the old saying goes, people are the weakest link in any security chain. Keeping a tight handle on the people who actually use your website can dramatically shore up your overall security posture. Undertake a regular check to ensure that you are removing inactive users such as those who have left the organisation, and ensure that those who remain only have access to the minimum areas of the site they need to, not the whole site by default.
Various functions are available within Drupal to shore up login and user processes, such as the Login Security module, which restricts unauthorised access attempts. It is also worth blocking the ‘user #1’ account created during setup, which automatically has all permissions in place.
6. Monitor your logs
Drupal’s integrated log viewer, within the reports area, is an extremely valuable tool when it comes to ascertaining that a cyberattack is taking place and assessing what has actually happened. Make sure you check your log reports regularly, and are alert to early warning signs such as failed login attempts.
7. Enable HTTPS
HTTPS is most commonly used for ecommerce sites and online banking, but any site that transfers sensitive information between user and web server should also be using it.
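If your Drupal site runs behind Apache, redirecting plain HTTP to HTTPS can be as simple as the following virtual-host sketch (the domain is a placeholder, and this assumes a certificate is already configured on the HTTPS virtual host):

```apacheconf
# Send all HTTP requests to the HTTPS version of the site.
<VirtualHost *:80>
    ServerName example.com
    Redirect permanent / https://example.com/
</VirtualHost>
```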
These seven best practices will have a dramatic effect on the overall security of your Drupal website, and ensure you can continue benefitting from the flexibility of the platform without sacrificing protection.
In this blog post, our technical lead Kevin guides you through the best caching strategies for Drupal 8.
Flow improvements with Drupal 8
The way data is cached has been overhauled and optimized in Drupal 8. Cached data is now aware of where it is used and when it can be invalidated. This resulted in two important cache bins responsible for holding rendered output: cache_render and cache_dynamic_page_cache. In previous versions of Drupal, a single page cache bin was responsible for the rendered output of a whole page.
Consequently, the chance of having to rebuild a whole page in Drupal 8 is far lower than in previous versions, because the cache render bin will contain some blocks already available for certain pages - for example a copyright block in your footer.
Nevertheless, having to rebuild the whole render cache from scratch on a high-traffic website can result in a lot of insert query statements for MySQL. This forms a potential performance bottleneck.
Sometimes you need to rebuild the cache. Doing this on large sites with many real-time visitors can lead to a MySQL lock timeout, because the cache tables are locked by the cache rebuild function. Your database is then unable to process the cache set queries in time, in the worst case resulting in downtime for your website.
Using Memcache allows you to offload cache bins directly into RAM, which makes cache sets much faster, speeding up the cache along the way and giving MySQL more breathing space.
Before you can connect to memcache, you need a memcache server up and running. You can find many tutorials on how to do this for your distribution, but if you use MAMP PRO 4 you can simply spin up the memcache server. By default, memcache runs on port 11211.
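On a Debian-style server, the setup can be sketched as follows (the package and service names are assumptions for Debian/Ubuntu; adjust for your distribution):

```shell
# Install and start memcached, then confirm it answers on the default port.
sudo apt-get install -y memcached
sudo systemctl start memcached

# Quick liveness check: "version" should print the server version.
printf 'version\r\nquit\r\n' | nc 127.0.0.1 11211
```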
Once you have the memcache server specifications (host IP and port), download and install the Memcache module, available here: https://www.drupal.org/project/memcache
This module is currently at the alpha3 stage and is ready to be used on production sites.
Once you have installed the module, it should automatically connect to memcache using the default settings, i.e. a memcache server running on localhost and listening on port 11211. If memcache is running on a different server or listening on another port, you need to modify the connection by changing the following line in your settings.php:

$settings['memcache']['servers'] = ['127.0.0.1:11211' => 'default'];
Once you have installed memcache and made the necessary changes to the settings.php file to connect to the memcache service, you need to configure Drupal to use the Memcache cache back end instead of the default Drupal cache back end. This can be done globally:

$settings['cache']['default'] = 'cache.backend.memcache';
However, doing so is not recommended, because it cannot be guaranteed that all contrib modules only perform simple GET and SET queries on cache tables. In Drupal 7, for example, the form caching bin could not be offloaded to Memcache, because the cache key could get overwritten with something else, resulting in a cache miss for specific form cache entries.
Therefore, it is recommended to always check that a cache bin is only used to store cache entries and fetch them later, without depending on an entry still being present in the cache.
Putting cache_render and cache_dynamic_page_cache into memcache is the safest and most beneficial configuration: the larger your site, the more queries those tables endure. Setting up those specific bins to use Memcache can be done with the following lines in settings.php:

$settings['cache']['bins']['render'] = 'cache.backend.memcache';
$settings['cache']['bins']['dynamic_page_cache'] = 'cache.backend.memcache';
How does it work?
To be able to test your setup and fine-tune Memcache, you should know how Memcache works. As explained before, we are telling Drupal to use the cache.backend.memcache service as its cache back end. This service is defined by the Memcache module and, like any other cache back end, implements CacheBackendInterface. This interface defines a cache back end and forces classes to implement the necessary cache get, set, delete, invalidate, etc. functions.
When the memcache service sets a cache entry, it stores it as a permanent item in Memcache, because validity is always checked on cache get.
Invalidation of items is done by setting the timestamp in the past. The entry will stay available in RAM, but when the service tries to load it it will detect it as an invalid entry. This allows Drupal to recreate the entry, which will then overwrite the cache entry in Memcache.
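Conceptually, the get path described above looks like the sketch below. This is a hedged illustration with hypothetical names (sketch_cache_get and the $item->expire property are illustrative), not the Memcache module's actual code:

```php
<?php
// Sketch of "invalidate by back-dating": an entry whose expire time
// is in the past is treated as a miss, so Drupal rebuilds the entry
// and the new set() overwrites the stale item in Memcache.
function sketch_cache_get(\Memcached $mc, string $cid) {
  $item = $mc->get($cid);
  if ($item === FALSE) {
    return FALSE;               // Not in RAM at all.
  }
  if ($item->expire < time()) {
    return FALSE;               // Present but invalidated: trigger a rebuild.
  }
  return $item;                 // Valid hit.
}
```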
Conclusion: when you clear all caches with Memcache installed, you do not remove all keys in Memcache but simply invalidate them by re-setting them with an expiration time in the past.
Simply using Memcache will not always mean that your site will be faster. Depending on the size of your website and the amount of traffic, you will need to allocate more RAM to Memcache.
How best to determine this amount? A useful starting point is how much data is currently cached in MySQL: sum the sizes of all cache tables, then check how much of that data is configured to go into Memcache.
Let me give an example: consider a 3GB cache_render table and a 1GB cache_dynamic_page_cache table, resulting in 4GB of data that would be offloaded to Memcache. Starting with a 4GB RAM setup for Memcache would give you a good start.
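Those table sizes can be read straight from MySQL's information_schema. A sketch, where 'drupal' is a placeholder for your database name:

```sql
-- Size of every Drupal cache table, largest first.
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
FROM information_schema.tables
WHERE table_schema = 'drupal'
  AND table_name LIKE 'cache%'
ORDER BY size_gb DESC;
```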
But how can you check if this setup is sufficient? There are a few simple rules to check whether you have assigned sufficient (or perhaps too much) RAM to Memcache.
- If your evictions are increasing (meaning that Memcache is overwriting keys to make space) and your hit rate is below 90% and dropping, you should allocate more memory.
- If your evictions are 0 but the hit rate is still low, you should review your caching logic. You are probably flushing caches too often, or your cached data is not being reused, meaning that your cache contexts are too wide.
- If your evictions are at 0, your hit rate is 90% or higher, and the bytes written to Memcache are lower than the allocated RAM, you can reduce the amount of RAM allocated to Memcache.
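The numbers behind these rules come from memcached's stats output. The snippet below works on a captured sample so the arithmetic is clear; on a live server the same fields would come from something like `printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211`:

```shell
# Sample "stats" output from memcached.
stats='STAT get_hits 9200
STAT get_misses 800
STAT evictions 0'

# Compute the hit rate and report evictions.
printf '%s\n' "$stats" | awk '
  /get_hits/   { hits = $3 }
  /get_misses/ { misses = $3 }
  /evictions/  { ev = $3 }
  END { printf "hit rate: %.1f%%  evictions: %d\n", 100 * hits / (hits + misses), ev }'
# -> hit rate: 92.0%  evictions: 0
```

With a hit rate above 90% and zero evictions, this sample would fall under the third rule above.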
It is very important that you never assign more RAM than available. If your server needs to start swapping, the performance will drop significantly.
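The cap is set when the daemon starts. The flags below are standard memcached options; the user name is an assumption for a typical Debian install:

```shell
# -m: item memory in megabytes (4 GB here), -d: daemonize,
# -p: TCP port, -u: run as this user. Keep -m below your free RAM
# so the server never swaps.
memcached -d -m 4096 -p 11211 -u memcache
```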
If you are considering using memcache for Drupal, you need to think a few things through in advance:
- Which cache bins will be offloaded into Memcache? Only offload cache bins that do not depend on an entry still being present in the cache.
- Does the site have a lot of traffic and a lot of content? This will result in larger render cache tables.
- How much RAM should be allocated to Memcache? This depends on the amount available on your server and the size of the cache bins you offload to Memcache.
Also keep in mind that the allocation of RAM for Memcache is not a fixed configuration. When your website grows, the cache size grows with it. This implies that the amount of necessary RAM will also increase.
We hope this blog post has been useful! Check our training page for more info about our Drupal training sessions for developers and webmasters.
This module disables CKEditor’s own context menu (the right-click popup menu) and allows the browser’s own context menu to display normally. This allows users to use the browser’s built-in spell checker, autocorrect and other built-in options.
NOTE 1: This is done by disabling the contextmenu, tabletools, and tableresize plugins. If you need these functions, do not install this module.
NOTE 2: Some options in the browser’s context menu may not be handled properly within the CKEditor editor. Test these browser context menu options before relying on them.
Adam Bergstein (nerdstein) joins Mike Anello to discuss the potential need to evolve Drupal Community Governance.
Interview
- Drupal Governance with Megan Sanicki and Whitney Hess - Drupalize.me podcast.
- The Process for Evolving Community Governance - blog post by Whitney Hess.
- Community Governance Considerations of Open Source Projects - blog post by Adam Bergstein.
- Drupal.org governance page.
- MyDropWizard.com - Long-term-support services for Drupal 6, 7, and 8 sites.
- WebEnabled.com - devPanel.
- DrupalCamp Asheville - July 14-15, 2017.
- Playing with his kids.
- Docker for Mac.
- Making Drupal 8 core an amazing experience for content authors.
- Holding an alligator.
- Working with Drupal at Penn State
If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.
An Interview with Eric Scott Sembrat; Web Developer, Graduate Student | Atlanta, GA
The Mailcamp module provides integration with the email service Mailcamp. It currently allows you to create a block with a signup form to sign up for mailing lists.
Usage
Enter your Mailcamp API credentials in the settings form.
Place a new block in the block layout and select the 'Mailcamp signup' block.
Select which mailing lists the user should be subscribed to after filling out the signup form. If you have fields defined in your mailing list, you have the option to show these on the signup form as well.
This module extends Workbench Access functionality to control access to editing taxonomies.
Workbench Access works with node entities only. This module allows you to also restrict editing of taxonomy terms in selected vocabularies. Here is an example use case where this module may be handy:
A module for those who need to import and export taxonomy terms via csv.
None known for Drupal 8
Latest release of Drupal 8.x.
Enable the module
Go to /admin/config/content/term-csv-import
Go to /admin/config/content/term-csv-export
A content management workflow is used by media enterprises to control authorship, editing and publishing access, and the assignment of roles for altering states, cycles and content types for users.
A content workflow is also known as a content governance model. It can define the roles, responsibilities, documentation and flow of content. Media enterprises have a responsibility to ensure a smooth workflow, because it usually involves many processes and people, ranging from the author to the editor, a publisher and a creative team.
A defined model of workflow involves all the stakeholders from planning…
You have a site where you want your users to create nodes, blog pages, etc. You use tag-style taxonomy terms on these nodes. You want your trusted site editors to be able to add new terms when they create content, but your regular users are polluting your tags vocabulary with misspelled words. How can you stop these users from creating new terms while allowing your editors to add terms as they write content? The answer is this module!
Originally posted on LinkedIN.
The Government of Canada’s Web Renewal Initiative has failed. It may not be public yet, but there really is no way to redeem this half-conceived initiative to centralize all government pages onto a single website - Canada.ca.
This goal was lifted from the UK Government’s Government Digital Service (GDS). The goal of the GDS team was no less than digital transformation. Our government appears to have mistaken the alpha.gov.uk site as the end goal, rather than a platform with which to experiment with new ideas in government usability. The GDS is continuing to innovate to better serve the needs of UK citizens, and having an open strategy allows them to have their ideas validated by the world.
The Web Renewal Initiative (the mega-migration to Canada.ca) was started by the Conservative government, which was obsessed with centralizing communications and outsourcing as much as possible to the private sector.
Centralizing on Canada.ca was a Bad Idea
Serving all public Government of Canada content via a single site guarantees that this project will not be able to Fail Forward and learn through constant iterations. If governments are going to learn and grow with their IT projects they need to be structured so that public servants are able to take on small risks. Building the “one site to rule them all” will ultimately leave everyone focused on limitations of the tool rather than the needs of the user.
There is not a single user for government sites. There is no way to appeal to scientists, students, seniors, travellers and business owners, to name just a few, through a single voice. You do need a single Canada.ca site to be able to effectively answer the most common questions from citizens, but you also need to be able to direct them to a more detailed department site if they want more information.
Many departments also have websites or web apps that they have built for specific purposes. Most government sites aren’t as active as weather.gc.ca and won’t need their content to be updated hundreds of times an hour. People go there for one specific reason (to get a permit, to find out if a drug is approved, to find the address of our High Commission in DC), and Canadians depend on this service. There are countless other examples where an agency might choose to set up a new website to target an audience or need which its departmental site cannot satisfy.
This project went off the rails before the RFP was even awarded. The very first item in the UK Government Digital Service’s Design Principles is to start with user needs. Although great usability folks have been involved, there hasn’t been a mandate of “service transformation” to really put users first. The rushed mandate of Canada.ca started with a bunch of assumptions and hasn’t brought on the user researchers or data analysts needed to understand how to better meet user needs, let alone talk to users. The best hope for Web Renewal was that it could save money; it was not designed to improve service.
It is worth mentioning that this initiative is built on proprietary software and managed completely by American-based international corporations. This approach does not support the broader public policy of a modern, open by default government that is supporting Canadian innovation. The process of centralizing and outsourcing Government IT makes it inevitable that multinational corporations are going to win contracts. Most Small & Medium-sized Enterprises (SME) just don’t have the resources to bid on multi-million dollar contracts let alone win them. When leveraging open-source, large projects can be broken down into smaller ones that will allow more Canadian companies to become involved.
Whether it is a giant multi-national or a small business, it is never a good idea for government to give a monopoly to a private sector company, like they did for Canada.ca. The vendor lock-in that comes with proprietary software makes it even worse as any transition away will include both migrating to a new technology stack as well as finding a new company to provide support.
I’ve previously highlighted the many problems with the implementation of Canada.ca. It is now time for everyone to admit that Web Renewal has failed. But if we do that, what should it be replaced with? What can be learned from this experiment and pulled forward into a plan to help build the innovative, modern government that Trudeau has promised Canadians?
I don’t think anyone is calling for a return to how government developed websites before Web Renewal. There does need to be more structure. There were too many orphaned projects that lacked proper accessibility, security & branding. What is the alternative?
10) Make things open: it makes things better
This is the final item in the UK GDS Design Principles. Last but not least, particularly since it underpins the Open Government approach shaping this discussion around the world. Building in the open has a great many advantages, which have been articulated very clearly by government leaders in the UK, USA, Australia, France, Spain, and indeed most of the G20.
“Open source software can support the Digital Government Strategy's "Shared Platform" approach, which enables Federal employees to work together-both within and across agencies to reduce costs, streamline development, apply uniform standards, and ensure consistency in creating and delivering information.” - U.S. Department of Health & Human Services Website
At the 2016 Open Government Partnership meeting in Paris the importance of Open Source was acknowledged by governments around the world, including the Government of Canada.
So start with an open platform. The tool doesn’t particularly matter, but the approach absolutely does. There are almost no acceptable reasons why the government should ever build software from scratch. Governments need to find existing software communities and become engaged with them.
- Review open-source software in use by our closest allies (the USA, Australia, New Zealand, the EU and its member countries)
- Experiment with public repositories other governments have shared
- Adopt several that meet Canada’s unique needs in specific domains
With the rate of change in IT, just to keep up, organizations need to be constantly investing in their workforce to ensure that they have the modern skills required. Working in the open makes developers more careful with their code. If your work is going to be published, you want to make sure that it is well written, documented and not introducing embarrassing bugs. Having a good reputation is increasingly important in the internet age. Working in the open also allows governments to have their work verified by external developers (for free).
“By making our code open and reusable we increase collaboration across teams, helping make departments more joined up, and can work together to reduce duplication of effort and make commonly used code more robust.” - Anna Shipman, Open Source Lead UK GDS
To increase collaboration outside of government, it is always useful to release code under a commonly used licence (such as the GPL, MIT or Apache), which aids distribution. The Open Government Licence adopted by Canada might become well understood in Canada, but not internationally. The US government defaults to Public Domain, which is very pervasive and also well understood.
Prepare for Linguistic Diversity
The ability to fully manage bilingual content is difficult for many sites. The Government of Canada also needs to be able to support the languages of First Nations, Inuit, Métis and New Canadians. Any Content Management System (CMS) chosen should be able to support, at a minimum, the orthographies of Ojibwe and Inuktitut in addition to languages like Arabic and Chinese, which are the first languages of many Canadians. There are several open-source solutions that can already address our complex linguistic requirements.
With a commitment to open-source, one could also build in a decentralized readability evaluator to ensure that content authors know how complex their work is (in real time) and that departments can assess a cross-site picture of their content. Writing in Plain Language isn’t something that comes naturally, but it is an important part of any accessibility or usability goals. There are well-established open-source tools that already allow for multiple ways to evaluate language complexity; it is simply a matter of ensuring that this is built into the new websites used for creating the content.
Commit to Adopting Open Standards
When the Government of Canada formally gives up its goal of implementing one site for the entire public service, we need to see a real commitment to Open Standards. Software interoperability allows the government to move the discussion away from specific tools and toward broader cross-departmental needs. The UN’s International Telecommunication Union (ITU) defines them this way:
“‘Open Standards’ are standards made available to the general public and are developed (or approved) and maintained via a collaborative and consensus driven process. ‘Open Standards’ facilitate interoperability and data exchange among different products or services and are intended for widespread adoption.”
The World Wide Web Consortium (W3C) is such a body, and has ongoing committees that work to improve standards like HTML, Web Accessibility Initiative – Accessible Rich Internet Applications (WAI-ARIA), Web Content Accessibility Guidelines (WCAG) 2.0 and Authoring Tool Accessibility Guidelines (ATAG) 2.0. Some of these standards underpin government initiatives like the Web Experience Toolkit, as well as the Common Look and Feel before it.
An important W3C standard for this discussion is the set of Semantic Web Standards, most fundamentally the Resource Description Framework (RDF). One could also look at a machine-readable markup language like the W3C’s eXtensible Markup Language (XML), or even cutting-edge features like Web Components. The important thing is that there is a set of agreed-upon standards with which government websites can effectively exchange information with each other.
A Coordinated Decentralized Approach
I don’t know of a government that has fully embraced the Semantic Web, but the technology is already well established. Adopting this set of standards would allow for the realization of much deeper content sharing between networked sites. With a cohesive implementation you can divide the roles of content generation and content curation.
In Part 2 of this article I will elaborate on how this approach could be leveraged within the Government of Canada.
Part 2: Implement a Federated Architecture
The Government of Canada may require 1000+ websites to effectively engage with all of its various stakeholders: people, organizations and other government agencies. Maybe it is as few as 100, but it doesn’t make sense to select an arbitrary number here. We will only know how many sites the Government of Canada needs when we understand the users better. The GDS’s first principle, Start With User Needs, is key. We know that there will be more than a handful, and that there will inevitably be overlapping content.
Certain departments must have authority over some content, and that content should be distributed across government so that it is timely and accurate. This was one of the problems that Web Renewal was attempting to resolve by centralizing everything.
With a commitment to Open Standards, it is possible to build a federated approach to content so that this can be accomplished. Any modern CMS can expose content in a machine-readable format (to everyone) so that it is open by default. It can then be consumed (by either people or machines) and easily syndicated within another site’s domain.
Some Practical Examples
Health Canada should be the authority on all information related to health. We can identify places where health information should be included in:
- Global Affairs Canada to help assist travelers
- Immigration and Citizenship in the application process
- GCTools for the public sector employees
- Weather.gc.ca might be useful for seasonal warnings
- Canada.ca the central government hub
Health Canada would be responsible for generating the content, and other government sites would simply be responsible for curating it. For the next SARS or bird flu-like scare Canadians need a central means to manage and update health information, but that can be automated through a federated architecture.
Similarly, it would be useful to use government sites to alert people to weather warnings in their area. Obviously you only want to include location-specific warnings on government sites when you have confidence about the location of the user. However, a federated, distributed network would make it possible to share this information and so protect Canadians.
Some Advantages of a Federated Approach
The current configuration of Canada.ca presents a number of security challenges that can be overcome with a federated approach. You could set up a content workflow between internal departmental sites, inaccessible to vendors, contractors and non-authorized personnel, and external public-facing sites, where content is exposed to the public only after it has cleared the appropriate approvals.
Having multiple sites in multiple environments makes the system much more robust; Web Renewal has created a single point of failure (as well as a huge bottleneck for content). Working with open-source communities that have a critical mass of users will also ensure that your infrastructure is not relying on “security by obscurity”.
The site that generates the content doesn’t need to be the site which displays the content. It makes sense that it would in most instances, but perhaps not all. The point of a central site, though, is to curate information and help ensure that users get the information they need as quickly as possible. The central site should not be where most content is generated.
The Government of Canada is attempting to modernize. The new Experimentation Direction for Deputy Heads has a lot of potential, but is severely restricted by Web Renewal. Being able to provide a sandboxed version of Canada.ca for people to experiment with would be a game changer for those wanting to innovate. Providing a simple framework for A/B testing is key if we are to know how best to interact with Canadians.
If Canada.ca becomes just a light framework that collates information from other government sites, there is no reason that this couldn’t be distributed. A central agency could experiment with several versions, each of which could independently build up-to-date information from live departmental sites. With a proper cloud-based environment it would be trivial to spin up a new variation, direct a percentage of traffic to the new instance of the site and evaluate what impact a change has on users’ behavior.
The Fate of Canada.ca
Obviously we still need a central website for citizens to engage with government. Like Ontario.ca, there needs to be a good starting point for everyone looking for government services. Citizens who don’t know where to go need a starting point. But frankly it doesn’t even necessarily need to be a CMS; one could use a static site generator to produce static web pages that are secure and robust, much like GitHub Pages does.
Ideally it would be great to have personalization in this central site to help guide people to the resource that they need, but there are many ways that it could simply aggregate information from federated departmental authorities and display it as part of Canada.ca.
Obviously search will be key to this. However, once all of the departmental information is in a machine-readable format, it will be much easier to provide one or more search options, which may be better suited for different needs. Many users are already going to start at Google.ca, so simply embedding a Google Search into the government site doesn’t necessarily give Canadians a better experience.
Integrating with other Levels of Government
Once you have Government of Canada departments on board, it will also be possible to integrate with other government agencies. Citizens don’t really care what level of government is responsible for their problem; they just want the problem to go away. But using an open, federated architecture, provincial and municipal departments can both include information from the Government of Canada in their sites (in real time, with no manual intervention) and share their own data (which could be aggregated as needed).
If everything developed by the Government of Canada is built with an Open by Default approach and shared back to the public, then it will be easier for other organizations to engage with government as a platform for innovation. We will see solutions spearheaded by government (like the Web Experience Toolkit) used and extended by other organizations, and it will become easier and more cost-effective to implement secure, accessible, bilingual solutions that Canadian organizations can adopt for their own needs.
Long Live Canada.ca
There is a path forward. Let’s stop spending money on expensive American proprietary software solutions, and start investing in a Federated Open Departmental Web Strategy. Canada needs the public sector to be championing open-source and open standards if we are going to catch up with our allies.
A cultural change is needed to make this happen. It won’t be easy, but we know that, with leadership and courage, huge changes have taken place in the least likely places. Dave Rogers and Steve Marshall of the UK’s Ministry of Justice have said that their “public code repository is an important part of our recruitment strategy.” If the government is interested in recruiting new talent, this could be an important step.
That being said, because our allies’ solutions are built in the open, we can catch up quickly if we are able to find the leadership to make it happen.
This outline has been mostly focused on changing the technology, but this federated distributed network will allow communications departments to be more agile & responsive as well. I have trouble imagining any modern organization starting to write a web page by opening up a Microsoft Word document. The web has more than enough capacity as a publishing framework that this step simply gets in the way. Canadians expect their government to be less rigid and more timely and by decentralizing communications tools we can help make that a reality.
Having the right tools in place allows for better workflow management with proper content controls. The end result should be empirically knowing that government sites are always getting better at meeting the needs of users.
Ever wondered how Drupal 8 authenticates a user? Let's do a deep dive and find out.
In this journey, we will encounter a few new concepts, which I'll explain briefly here and in detail in separate blog posts. Many of these concepts are borrowed from Symfony and adopted in Drupal 8. The journey of a request begins in a Symfony component called the HTTP kernel, whose job is to handle requests and respond to them in an event-driven way.
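As a rough, self-contained sketch of that idea (the class names here are simplified stand-ins, not the real Symfony `HttpKernel` or Drupal authentication APIs), an authentication step can be modeled as a listener that reacts to the request event and attaches a user to it:

```php
<?php

// Minimal model of an event-driven request flow: listeners registered for a
// "kernel.request"-style event get a chance to inspect the incoming request.
class RequestEvent {
    public array $request;
    public ?string $user = null;
    public function __construct(array $request) { $this->request = $request; }
}

class EventDispatcher {
    private array $listeners = [];
    public function addListener(string $event, callable $listener): void {
        $this->listeners[$event][] = $listener;
    }
    public function dispatch(string $event, RequestEvent $e): RequestEvent {
        foreach ($this->listeners[$event] ?? [] as $listener) {
            $listener($e);
        }
        return $e;
    }
}

// A toy "authentication provider" listener: if the request carries a known
// session cookie, resolve it to a user; otherwise fall back to anonymous.
$sessions = ['abc123' => 'alice'];
$dispatcher = new EventDispatcher();
$dispatcher->addListener('kernel.request', function (RequestEvent $e) use ($sessions) {
    $cookie = $e->request['cookie'] ?? null;
    $e->user = $sessions[$cookie] ?? 'anonymous';
});

$event = $dispatcher->dispatch('kernel.request', new RequestEvent(['cookie' => 'abc123']));
echo $event->user, "\n"; // prints "alice"
```

The real kernel dispatches many more events (routing, access checks, response generation), but the shape is the same: authentication happens as one step in an event-driven pipeline rather than in a monolithic bootstrap.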
Over the past 6 years, we've trained hundreds of people through our 12-week Drupal Career Online class, our new 6-week Mastering Professional Drupal Developer Workflows with Pantheon class, and our dozens of public and private trainings (literally) around the world. As part of our long-form 12- and 6-week classes, we've been providing ongoing support for our graduates in the form of DrupalEasy Office Hours.
Each week, we set aside two hours for any current student or graduate of any of our long-form classes to join our online classroom to ask just about any Drupal-related question they have. It might be about a project they're working on, something they learned in the course, or advice on how to tackle something that is a bit outside of their comfort zone. Regularly using screen-sharing, we can almost always help the person with their request - and most of the time, those watching pick up a thing or two as well.
The most rewarding aspect of DrupalEasy Office Hours (for us, at least) is watching students help other students. Robert A. Heinlein once said that "when one teaches, two learn," and that is something we try to encourage in all of our classes as well as in DrupalEasy Office Hours.
This type of learning community has been a hallmark of what DrupalEasy training, consulting, and project coaching is all about. By engaging a subset of the larger Drupal community, our students gain experience, knowledge, and most of all - the confidence to ask fellow community members for help in an environment that is supportive and nurturing.
Over the past few years, we've heard of various Drupal shops running similar programs for their clients. We can't think of a better way to provide ongoing goodwill and mentoring.
If you're a graduate of one of our long-form classes, be sure to pop in and say hello (contact us for details).
Most of the information I have come across about migrating from Drupal 6 to Drupal 8 is about migrating content. However, before tackling that problem another one must be solved; maybe it is obvious and hence understated, so let's spell it out loud: preserving the site functionality. That means checking whether the contrib modules need to be ported to Drupal 8, and also checking whether a solution used in the previous version of the site can be replaced with a completely different approach in Drupal 8.
Let's take ao2.it as a study case.
When I set up ao2.it back in 2009 I was new to Drupal; I chose it mainly to have a peek at the state of Open Source web platforms.
Bottom line, I ended up using many quick and dirty hacks just to get the blog up and running: local core patches, theme hacks to solve functional problems, and so on.
Moving to Drupal 8 is an opportunity to do things properly and finally pay some technical debt.
For a moment I had even thought about moving away from Drupal completely and using a solution more suited to my usual technical taste (I have a background in C libraries and Linux kernel programming), like keeping the content in git and generating static web pages. But once again I didn't want to miss out on what web frameworks are up to these days, so here I am again, getting my hands dirty with this little over-engineered personal Drupal blog, hoping that this time I can at least make it a reproducible little over-engineered personal Drupal blog.
In this series of blog posts I'll try to explain the choices I made when I set up the Drupal 6 blog and how I am re-evaluating them for the migration to Drupal 8.
The front page view
ao2.it was also an experiment with a multi-language blog, but I never intended to translate every piece of content, so it was always a place where some articles would be in English, some in Italian, and the general pages would actually be multi-language.
This posed a problem about what to show on the front page:
- If every node was shown, there would be duplicates for translated nodes, which can be confusing.
- If only nodes in the current interface language were shown, the front page would list completely different content across languages, which does not represent the timeline of the blog content.
So a criterion for a front page of a partially multi-lingual site could be something like the following:
- If a node has a translation in the current interface language, show that;
- if not, show the original translation.
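The criterion above can be sketched in a few lines of plain PHP (no Drupal APIs; `select_translation`, the `$translations` array, and the language codes are illustrative names, not the module's actual code):

```php
<?php

// Given the available translations of one piece of content (langcode => title),
// the langcode of the original, and the current interface langcode, pick the
// translation to show on the front page.
function select_translation(array $translations, string $original, string $current): string
{
    // Prefer the translation in the current interface language, if it exists...
    if (array_key_exists($current, $translations)) {
        return $translations[$current];
    }
    // ...otherwise fall back to the original translation.
    return $translations[$original];
}

$node = ['en' => 'Hello', 'it' => 'Ciao'];
echo select_translation($node, 'it', 'en'), "\n"; // prints "Hello": current language available
echo select_translation($node, 'it', 'fr'), "\n"; // prints "Ciao": falls back to the original
```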
In Drupal 6 I used the Select translation module, which worked fine, but it was not available for Drupal 8.
So I asked the maintainers if they could give me the permission to commit changes to the git repository and I started working on the port myself.
The major problem I had to deal with was that Drupal 6 approached multi-language by default with a mechanism called "Content translation", where separate nodes represent different translations (i.e. different rows in the node table, each with its own nid), tied together by a tnid field (translation set id): different nodes with the same tnid are translations of the same content.
Drupal 8 instead works with "Entity translations": a single node represents all of its translations and is listed only once in the node table, and the actual translations are handled at the entity field level, in the node_field_data table.
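To make the difference concrete, here is an illustrative, heavily simplified model of the two storage shapes (the real schemas have many more columns, and the Drupal 6 translation-set column is named tnid in the node table):

```php
<?php

// Drupal 6 "Content translation": one row per translation in {node},
// tied together by a shared translation-set id.
$d6_node_table = [
    ['nid' => 1, 'tnid' => 1, 'language' => 'en', 'title' => 'Hello'],
    ['nid' => 2, 'tnid' => 1, 'language' => 'it', 'title' => 'Ciao'],
];

// Drupal 8 "Entity translation": one row in {node}, and one row per
// language in {node_field_data}.
$d8_node_table = [
    ['nid' => 1],
];
$d8_node_field_data = [
    ['nid' => 1, 'langcode' => 'en', 'title' => 'Hello'],
    ['nid' => 1, 'langcode' => 'it', 'title' => 'Ciao'],
];

// Collecting all translations of the same content under each model:
$d6 = array_filter($d6_node_table, fn($row) => $row['tnid'] === 1);
$d8 = array_filter($d8_node_field_data, fn($row) => $row['nid'] === 1);
echo count($d6), ' ', count($d8), "\n"; // prints "2 2"
```

Both models hold the same two translations, but a query that joins on nid behaves very differently in each, which is exactly why the module's selection logic had to be rewritten for the port.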
While at it I also took the chance to refactor and clean up the code, adding a drush command to test the functionality from the command line.
The code looks better structured thanks to the Plugin infrastructure and now I trust it a little more.
Preserve language
On ao2.it I also played with the conceptual difference between the “Interface language” and the “Content language” but Drupal 6 did not have a clean mechanism to differentiate between the two.
So I used the Preserve language module to be able to only switch the interface language when the language prefix in the URL changed.
It turns out that an external module is not needed anymore for that because in Drupal 8 there can be separate language switchers, one for the interface language and one for the content language.
However there are still some issues with the interaction between them, as reported in Issue #2864055: LanguageNegotiationContentEntity: don't break interface language switcher links; feel free to take a look and comment on possible solutions.
More details about the content language selection in a future blog post.
Media Entity Usage is a module that allows content editors to check whether a media item is referenced by another entity. On its own it does nothing besides providing an additional page to review references, and a Views field, for use in the media administration view, that shows a reference counter.
Media Entity Usage has two submodules that collect information about references:
- MEU Node - collects information about referencing nodes
- MEU Paragraphs - collects information about referencing paragraphs and their parent entities
Sets the ALT property of an image field from the node title.
An absolutely positioned notification ribbon, anchored to the top of the page, for your visitors.
No need to worry about the hassle of adjusting the width and height of CSS ribbons:
this plugin's ribbon can wrap around any div.
Just write a few lines of code and get any div wrapped with a ribbon.
Works in all major browsers.
DebugMe is a visual feedback, issue tracking & project management solution which saves time and frustration for everyone during a website project. This module allows you to quickly and easily add DebugMe to your Drupal site. It lets you grant access to DebugMe by role and even turn it off for selected pages. DebugMe is free for 2 projects and 2 users.