Newsfeeds

Srijan Technologies: Integrating Drupal with AWS Machine Learning

Planet Drupal - 23 August 2019 - 1:00am

As enterprises look for ways to stay ahead of the curve in an increasingly digital age, machine learning is giving them the boost they need for a seamless digital customer experience.

Categories: Drupal

Agiledrop.com Blog: Interview with pinball wizard Greg Dunlap, Senior Digital Strategist at Lullabot

Planet Drupal - 22 August 2019 - 11:19pm

This time we talked with Greg Dunlap, pinball wizard and Lullabot's senior digital strategist. We spoke of how satisfying it is to work on interesting things with the right people, the importance of the Drupal Diversity and Inclusion initiative, and the close similarities between the Drupal community and Greg's local pinball community.

Categories: Drupal

Freelock : Better Care Network

Planet Drupal - 22 August 2019 - 2:15pm

More to come soon....

Categories: Drupal

Hatch and Vodafone partner to bring 5G mobile game streaming to Germany

Social/Online Games - Gamasutra - 22 August 2019 - 12:31pm

Rovio-owned Hatch is working with the phone service provider Vodafone to bring its 5G mobile game streaming platform to select Android phones in Germany. ...

Categories: Game Theory & Design

Module Locator

New Drupal Modules - 22 August 2019 - 12:27pm

This simple module lets you see each module's path (location) on the module list page.

Categories: Drupal

Dropsolid: Contributing to Open Source, what's your math?

Planet Drupal - 22 August 2019 - 11:58am

1xINTERNET made a great post about calculating how much you give back and contribute to Open Source. Open Source makes our business model possible, and some people in the company, certainly yours truly, have made a career out of it. Our CEO even started Dropsolid because he thought we could approach this differently. If this makes a difference for us, why not for our employees as well? Could it even have a big impact on recruitment?

Delivering Open Digital Experiences with Open Source components on our own Dropsolid Platform is also very interesting for clients. We are able to deliver quality projects on a budget one could once only dream of. The downside is that you are not the only one with this magical ability, so it is a competitive landscape. The question then is how you can stand out, how you can make that difference.

Let's honor 1xINTERNET by doing our math.

How do we contribute?

Our contribution to the Drupal project can be divided into the same three areas:

  • Community work
  • Events, sponsorships, and memberships
  • Source code
Community work

During the last year and even this year we spent a lot of time helping the Belgian Drupal Community. We organised Drupal User Group meetings, helped with organising Drupalcamp Belgium and are currently actively involved in organising Drupal Developer Days. Next to that, yours truly is also active as an organisational member of the European Splash Awards and as Chair of the Belgian Drupal Community. We also had several speakers from Dropsolid at all these events, and I volunteered as Drupalcon Track Chair or Co-Chair for the DevOps track at both Drupal Europe and Drupalcon Amsterdam. Attending those meetings during the day or evening takes time, but it is well worth it. On top of that, we gave back to the community by going to Drupalcamp Kiev and Drupal Mountain Camp to share our knowledge.

Taking all of the above together, time spent on community activities adds up to around 1 FTE. This includes the time spent organising events, preparing talks, arranging sponsorships and enabling our employees to attend. For reference, in 2018 we had on average 65 people working at Dropsolid.

Sponsorships and memberships

Since 2018 we have invested in the following events and memberships:

  • Silver Sponsor for Drupal Europe 2018
  • Gold Sponsor for Drupalcamp Belgium
  • Gold Sponsor for Drupaljam XL 2019
  • Diamond Sponsor for Drupalcon Amsterdam 2019
  • Organisation Member of the Belgian Drupal Community
  • Organisation Member of the Drupal Association
  • Supporting Partner of the Drupal Association
  • Donation to the Promote Drupal Fund
  • Employing a provisional member of the Drupal Security Team

In total we spent close to 1% of our total yearly spend on sponsorships, travel and memberships related to the Drupal project.

Drupal.org contributions

Without a doubt we actively contribute back to the Drupal.org source code. Some contributions also flow from community events and other important milestones that matter to the community. All efforts matter, not just code.

We contributed our complete install profile, Dropsolid Rocketship, and all the paragraphs that come with it. Dropsolid has a dedicated R&D team that consists of 8 people: a Drupal Backend & Frontend R&D Engineer, a couple of DevOps engineers, 2 machine learning geniuses and myself. Next to that, we supported one of our employees in his personal pet project, drupalcontributions.org. Let's take a look at that site to come up with some numbers.

We support over 30 projects and have over 207 credits, 58 of them earned in the last 3 months, along with 181 comments in the same period. I dare say we are ramping it up.

We currently rank 42nd among the organisations that contributed the most back to Drupal in the last 3 months. Compared to 1xINTERNET we have a bigger headcount, but I am nevertheless equally proud, just as they are, to see this happen and to make an impact.

Not only do we contribute with our own agenda, we also contract Swentel to spend 1 day a month on Drupal contributions without any agreed agenda. That single day a month accounted for 17 comments in the last 30 days alone. The point is that just counting credits isn't fair: it's about how you come together and find a common solution. Communication is key, and we support it with hard-earned money.

How much did we contribute since 2018?

From the 8 R&D people alone, we contribute at least 1 FTE back to the Drupal ecosystem. Combined with the contributions that flow back from client projects by our development teams and the core contribution sponsorship for Swentel, we easily get to 2 FTEs contributed back on a total headcount of 65.

If we add up the efforts in community work, sponsorships and memberships on top of payroll, Dropsolid contributed the equivalent of approximately 4.5% of our annual spend in 2018 (2 FTEs on a headcount of 65 is about 3%, plus the close to 1% of yearly spend on sponsorships, travel and memberships mentioned above). With Drupalcon Amsterdam 2019 coming up and us attending as a Diamond Sponsor, this will be even bigger: we're aiming to send more than 20 payroll employees to Drupalcon and have 3 selected sessions! And we're not cutting back on our efforts to contribute back; on the contrary!

War on Talent

Being visible in the ecosystem through code contributions, event contributions and sponsorships helps make us a company people want to work for. Obviously this is not the only motivation, but given that around a quarter of the employees in our technical teams are remote developers, a good company brand is important. We see that this works and that it helps us attract and retain highly skilled employees. We now have employees in countries such as Portugal, Slovenia, Poland, Ukraine, Romania, Bulgaria, ...

 

Lead, don't just follow

By actively participating, whether in code, events or anything else that decides the future of Drupal, or for that matter of Digital Experiences, you stay ahead of the curve. This allows for more competitive offers to clients and helps you win deals that would not be possible as a follower of the pack. Dropsolid believes in taking the lead when it comes to being the best Drupal partner in Greater Europe, and is proud to believe we are. Up to you to believe us or not ;-)

Are you a client, or looking for a job, and interested in hearing more? Don't hesitate to contact us.

Nick Veenhof
Categories: Drupal

Layout Builder UX

New Drupal Modules - 22 August 2019 - 11:49am

Iterating on Layout Builder UI for usability improvements.

Categories: Drupal

Bounteous.com: Drush 8 to Drush 9 Migration Path

Planet Drupal - 22 August 2019 - 11:28am
With the release of Drupal 9 coming soon, the need to update from Drush 8 to Drush 9 is imminent.
Categories: Drupal

The first week of a Kickstarter - by Nic Rutherford

Gamasutra.com Blogs - 22 August 2019 - 8:18am
More information about the Fringe Planet Kickstarter - a week into the adventure
Categories: Game Theory & Design

How to Fail in F2P Mobile Games Publishing, Part 1 - by Matthew Emery

Gamasutra.com Blogs - 22 August 2019 - 7:49am
95% of mobile F2P games are unprofitable. In this series of articles, I’ll share my insights on the most common reasons why mobile games don’t reach profitability. To kick things off, let’s start with: starting development without a clear business case.
Categories: Game Theory & Design

Gnomecast #73 – Navigating the Casual

Gnome Stew - 22 August 2019 - 5:00am

Join Ang, Jen, and Matt for a discussion about how to handle players that are less invested in the game. Are these gnomes serious enough to avoid getting tossed in the stew this week?

Download: Gnomecast #73 – Navigating the Casual

Check out Commandroids: A World Transformed on Kickstarter through September 2nd!

Get details on GEM fund-matching for the 2019 IGDN Metatopia Sponsorship here.

Follow Jen at @JenKatWrites on Twitter, check out the JenKatWrites Patreon, and check out her blog First Sight Second Thoughts.

Do not try to follow Matt on the Internet, but check out his articles at Gnome Stew!

Follow Ang at @orikes13 on Twitter and see pictures of her cats at @orikes13 on Instagram.

Keep up with all the gnomes by visiting gnomestew.com, following @gnomestew on Twitter, or visiting the Gnome Stew Facebook Page. Subscribe to the Gnome Stew Twitch channel, check out Gnome Stew Merch, and support Gnome Stew on Patreon!

For another great show on the Misdirected Mark network, check out Wednesday Evening Podcast All-Stars!

Categories: Game Theory & Design

Amazee Labs: Contribution and Client Projects: Part Two

Planet Drupal - 22 August 2019 - 4:23am
The first part of this article described why and how the stakeholders of a project can contribute to Drupal. This developer-oriented article is a summary of the Drupal.org documentation for new code contributors. We will cover: how to work on the issue queue, how to publish a project, and how to approach this process with Drupal 9 in mind. 
Categories: Drupal

Entityreference view filtered formatter

New Drupal Modules - 22 August 2019 - 12:50am

A very simple module that implements a new formatter that displays the referenced entities rendered by entity_view(), filtered by the results of a view configured from the field formatter settings.

Categories: Drupal

Business of Gaming Retail: When Is It Time to Hire?

RPGNet - 22 August 2019 - 12:00am
Hiring Your First Employee
Categories: Game Theory & Design

Zenziva SMS Integration

New Drupal Modules - 21 August 2019 - 4:32pm

This module provides integration between the Zenziva SMS service and the SMS framework module.

Categories: Drupal

Agaric Collective: Migrating Microsoft Excel and LibreOffice Calc files into Drupal

Planet Drupal - 21 August 2019 - 3:03pm

Today we will learn how to migrate content from LibreOffice Calc and Microsoft Excel files into Drupal using the Migrate Spreadsheet module. We will give instructions on getting the module and its dependencies. Then, we will present how to configure the module for spreadsheets with or without a header row. There are two example migrations: images and paragraphs. Let’s get started.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD Google Sheets, Microsoft Excel, and LibreOffice Calc source migration, whose machine name is ud_migrations_sheets_sources. It comes with four migrations: udm_google_sheets_source_node.yml, udm_libreoffice_calc_source_paragraph.yml, udm_microsoft_excel_source_image.yml, and udm_backup_csv_source_node.yml. The image migration uses a Microsoft Excel file as its source. The paragraph migration uses a LibreOffice Calc file as its source. The CSV migration is a backup in case the Google Sheet is not available. To execute the last one you would need the Migrate Source CSV module.

You can get the Migrate Spreadsheet module using composer: composer require drupal/migrate_spreadsheet:^1.0. This module depends on the PHPOffice/PhpSpreadsheet library and many PHP extensions, including ext-zip. Check this page for a full list of dependencies. If any required extension is missing, the installation will fail. If your Drupal site is not composer-based, you will not be able to use Migrate Spreadsheet unless you jump through a lot of hoops.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration. The destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain Microsoft Excel and LibreOffice Calc migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from different sources.

Note: You can literally swap migration sources without changing any other part of the migration. This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although that would be possible here, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Understanding the source document and plugin configuration

In any migration project, understanding the source is very important. For Microsoft Excel and LibreOffice Calc migrations, the primary thing to consider is whether or not the file contains a row of headers. Also, a workbook (file) might contain several worksheets (tabs). You can only migrate from one worksheet at a time. The example documents have two worksheets: UD Example Sheet and Do not peek in here. We are going to be working with the first one.

The spreadsheet source plugin exposes seven configuration options. The values to use might change depending on the presence of a header row, but all of them apply for both types of document. Here is a summary of the available configurations:

  • file is required. It stores the path to the document to process. You can use a relative path from the Drupal root, an absolute path, or stream wrappers.
  • worksheet is required. It contains the name of the one worksheet to process.
  • header_row is optional. This number indicates which row contains the headers. Contrary to CSV migrations, the row number is not zero-based. So, set this value to 1 if headers are on the first row, 2 if they are on the second, and so on.
  • origin is optional and defaults to A2. It indicates which non-header cell contains the first value you want to import. It assumes a grid layout and you only need to indicate the position of the top-left cell value.
  • columns is optional. It is the list of columns you want to make available for the migration. In case of files with a header row, use those header values in this list. Otherwise, use the default title for columns: A, B, C, etc. If this setting is missing, the plugin will return all columns. This is not ideal, especially for very large files containing more columns than needed for the migration.
  • row_index_column is optional. This is a special column that contains the row number for each record. It can be used as a unique identifier for the records in case your dataset does not provide a suitable value. Exposing this special column in the migration is up to you; if you do, you can come up with any name as long as it does not conflict with the header row names set in the columns configuration. Important: this is an autogenerated column, not one of the columns that come with your dataset.
  • keys is optional and, if not set, it defaults to the value of row_index_column. It contains an array of column names that uniquely identify each record. For files with a header row, you can use the values set in the columns configuration. Otherwise, use default column titles like A, B, C, etc. In both cases, you can use the row_index_column column if it was set. Each value in the array will contain database storage details for the column.

Note that nowhere in the plugin configuration do you specify the file type. The same setup applies to both Microsoft Excel and LibreOffice Calc files. The library will take care of detecting and validating the proper type.
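To tie these options together, here is a minimal sketch of a source configuration. The file path and worksheet name are hypothetical, chosen only for illustration; just the two required options are set, so everything else falls back to the defaults described in the list above:

source:
  plugin: spreadsheet
  # Hypothetical path, for illustration only.
  file: modules/custom/example/sources/example.ods
  worksheet: 'Sheet1'
  # header_row, origin, columns, row_index_column, and keys are omitted.
  # Per the option list above, reading then starts at cell A2, all columns
  # are returned, and the autogenerated row index identifies each record.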

Migrating spreadsheet files with a header row

This example is for the paragraph migration and uses a LibreOffice Calc file. The following snippets show the UD Example Sheet worksheet and the configuration of the source plugin:

book_id, book_title, Book author
B10, The definitive guide to Drupal 7, Benjamin Melançon et al.
B20, Understanding Drupal Views, Carlos Dinarte
B30, Understanding Drupal Migrations, Mauricio Dinarte

source:
  plugin: spreadsheet
  file: modules/custom/ud_migrations/ud_migrations_sheets_sources/sources/udm_book_paragraph.ods
  worksheet: 'UD Example Sheet'
  header_row: 1
  origin: A2
  columns:
    - book_id
    - book_title
    - 'Book author'
  row_index_column: 'Document Row Index'
  keys:
    book_id:
      type: string

The name of the plugin is spreadsheet. Then you use the file configuration to indicate the path to the file. In this case, it is relative to the Drupal root. UD Example Sheet is set as the worksheet to process. Because the first row of the file contains the headers, header_row is set to 1 and origin to A2.

Then specify which columns to make available to the migration. In this case, we listed all of them, so this setting could have been left unassigned. Still, it is better to get into the habit of being explicit about what you import: if the file were to change and more columns were added, you would not have to update the migration to prevent unneeded data from being fetched. The row_index_column is not actually used in the migration, but it is set to show all the configuration options in the example. Its values will be 1, 2, 3, etc. Finally, keys is set to the column that serves as the unique identifier for the records.

The rest of the migration is almost identical to the CSV example. Small changes were made to prevent machine name conflicts with other examples in the demo repository. For reference, the following snippet shows the process and destination sections for the LibreOffice Calc paragraph migration.

process:
  field_ud_book_paragraph_title: book_title
  field_ud_book_paragraph_author: 'Book author'
destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: ud_book_paragraph

Migrating spreadsheet files without a header row

Now let’s consider an example of a spreadsheet file that does not have a header row. This example is for the image migration and uses a Microsoft Excel file. The following snippets show the UD Example Sheet worksheet and the configuration of the source plugin:

P01, https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg
P02, https://agaric.coop/sites/default/files/pictures/picture-3-1421176784.jpg
P03, https://agaric.coop/sites/default/files/pictures/picture-2-1421176752.jpg

source:
  plugin: spreadsheet
  file: modules/custom/ud_migrations/ud_migrations_sheets_sources/sources/udm_photos.xlsx
  worksheet: 'UD Example Sheet'
  header_row: null
  origin: A1
  columns:
    - A
    - B
  row_index_column: null
  keys:
    A:
      type: string

The plugin, file, and worksheet configurations follow the same pattern as the paragraph migration. The difference for files with no header row is reflected in the other parameters: header_row is set to null to indicate the lack of headers, and origin is set to A1. Because there are no column names to use, you have to use the ones provided by the spreadsheet. In this case, we want to use the first two columns: A and B. Contrary to CSV migrations, the spreadsheet plugin does not allow you to define aliases for unnamed columns. That means you have to use A and B in the process section to refer to these columns.

row_index_column is set to null because it will not be used. And finally, in the keys section, we use the A column as the primary key. This might seem like an odd choice: why use that value when you could use the row_index_column as the unique identifier for each row? If this were an isolated migration, that would be a valid option. But this migration is referenced from the node migration explained in the previous example, and that lookup is made based on the values stored in the A column. If we used the index of the row as the unique identifier, we would have to update the other migration or the lookup would fail. In many cases, that is neither feasible nor desirable.
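To make the lookup relationship concrete, here is a sketch of how the node migration described in the previous post references this image migration. The names come from the examples in this series; the point is that the values passed as source must line up with the keys stored from column A:

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_microsoft_excel_source_image
    # The values of src_photo_file (P01, P02, P03) must match this image
    # migration's keys, which come from column A of the spreadsheet.
    source: src_photo_file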

Except for the name of the columns, the rest of the migration is almost identical to the CSV example. Small changes were made to prevent machine name conflicts with other examples in the demo repository. For reference, the following snippet shows part of the process and destination section for the Microsoft Excel image migration.

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: B # This is the photo URL column.
destination:
  plugin: 'entity:file'

Refer to this entry to learn how to run migrations that depend on others. In this case, you can execute them all by running: drush migrate:import --tag='UD Sheets Source'. And that is how you can use Microsoft Excel and LibreOffice Calc files as the source of your migrations. This example is very interesting because each of the migrations uses a different source type. The node migration explained in the previous post uses a Google Sheet. This is a great example of how powerful and flexible the Migrate API is.

What did you learn in today’s blog post? Have you migrated from Microsoft Excel and LibreOffice Calc files before? If so, what challenges have you found? Did you know the source plugin configuration is not dependent on the file type? Share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Read more and discuss at agaric.coop.

Categories: Drupal

Government regulation means Steam China will be a curated platform

Social/Online Games - Gamasutra - 21 August 2019 - 1:10pm

Valve's mostly anything-goes approach to managing a storefront won't carry over to the recently announced Steam China. ...

Categories: Game Theory & Design

Agaric Collective: Migrating Google Sheets into Drupal

Planet Drupal - 21 August 2019 - 10:05am

Today we will learn how to migrate content from Google Sheets into Drupal using the Migrate Google Sheets module. We will give instructions on how to publish them in JSON format to be consumed by the migration. Then, we will talk about some assumptions made by the module to allow easier plugin configurations. Finally, we will present the source plugin configuration for Google Sheets migrations. Let’s get started.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD Google Sheets, Microsoft Excel, and LibreOffice Calc source migration, whose machine name is ud_migrations_sheets_sources. It comes with four migrations: udm_google_sheets_source_node.yml, udm_libreoffice_calc_source_paragraph.yml, udm_microsoft_excel_source_image.yml, and udm_backup_csv_source_node.yml. The last one is a backup in case the Google Sheet is not available. To execute it you would need the Migrate Source CSV module.

You can get the Migrate Google Sheets module and its dependency using composer: composer require drupal/migrate_google_sheets:^1.0. It depends on Migrate Plus; installing via composer will get you both modules. If your Drupal site is not composer-based, you can download them manually.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration. The destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain Google Sheets migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from different sources. In the next article, two of the migrations will be explained. They read from Microsoft Excel and LibreOffice Calc files.

Note: You can literally swap migration sources without changing any other part of the migration. This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although that would be possible here, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating nodes from Google Sheets

In any migration project, understanding the source is very important. For Google Sheets, there are many details that need your attention. First, the module works on top of Migrate Plus and extends its JSON data parser. In fact, you have to publish your Google Sheet and consume it in JSON format. Second, you need to make the JSON export publicly available. Third, you must understand the JSON format provided by Google Sheets and the assumptions made by the module to configure your fields properly. Specific instructions for Google Sheets migrations will be provided. That being said, everything explained in the JSON migration example is applicable in this case too.

Publishing a Google Sheet in JSON format

Before starting the migration, you need the source from where you will extract the data. For this, create a Google Sheet document. The example will use this one:

https://docs.google.com/spreadsheets/d/1YVJt9isPNjkUNHf3YgoTx38r04TwqRYnp1LFrik3TAk/edit#gid=0

The 1YVJt9isPNjkUNHf3YgoTx38r04TwqRYnp1LFrik3TAk value is the workbook ID, which will be used later. Once you are done creating the document, you need to publish it so it can be consumed by the Migrate API. To do this, go to the File menu and then click on Publish to the web. A modal window will appear where you can configure the export. Note that it is possible to publish the Entire document or only some of the worksheets (tabs). The example document has two: UD Example Sheet and Do not peek in here. Make sure that all the worksheets that you need are published, or export the entire document. Unless multiple urls are configured, a migration can only import from one worksheet at a time. If you fetch from multiple urls, they need to have homogeneous structures. When you click the Publish button, a new URL will be presented. In the example it is:

https://docs.google.com/spreadsheets/d/e/2PACX-1vTy2-CGzsoTBkmvYbolFh0UDWenwd9OCdel55j9Qa37g_earT1vA6y-6phC31Xkj8sTWF0o6mZTM90H/pubhtml

The previous URL will not be used. Publishing a document is a required step, but the URL that you get should be ignored. Note that you do not have to share the document. It is fine that the document is private to you as long as it is published. It is up to you if you want to make it available to Anyone with the link or Public on the web and potentially grant edit or comment access. The Share setting does not affect the migration. The final step is getting the JSON representation of the document. You need to assemble a URL with the following pattern:

http://spreadsheets.google.com/feeds/list/[workbook-id]/[worksheet-index]/public/values?alt=json

Replace [workbook-id] with the workbook ID mentioned at the beginning of this section, the one that is part of the regular document URL, not the published URL. The worksheet-index is an integer, starting at 1, that represents the order in which the worksheets appear in the document. Use 1 for the first, 2 for the second, and so on. This means that changing the order of the worksheets will affect your migration; at the very least, you will have to update the path to reflect the new index. In the example migration, the UD Example Sheet worksheet will be used. It appears first in the document, so the worksheet index is 1. Therefore, the exported JSON will be available at the following URL:

http://spreadsheets.google.com/feeds/list/1YVJt9isPNjkUNHf3YgoTx38r04TwqRYnp1LFrik3TAk/1/public/values?alt=json

Understanding the published Google Sheet JSON export

Take a moment to read the JSON export and try to understand its structure. It contains much more data than what you need. The records to be imported can be retrieved using this XPath expression: /feed/entry. You would normally have to assign this value to the item_selector configuration of the Migrate Plus’ JSON data parser. But, because the value is the same for all Google Sheets, the module takes care of this automatically. You do not have to set that configuration in the source section. As for the data cells, have a look at the following code snippet to see how they appear on the export:

{ "feed": { "entry": [ { "gsx$uniqueid": { "$t": "1" }, "gsx$name": { "$t": "One Uno Un" }, "gsx$photo-file": { "$t": "P01" }, "gsx$bookref": { "$t": "B10" } } ] } }

Tip: Firefox includes a built-in JSON document viewer, which helps a lot in understanding the structure of the document. If your browser does not include a similar tool out of the box, look for one in its extensions repository. You can also use a file formatter to pretty-print the JSON output.

The following is a list of headers as they appear in the Google Sheet compared to how they appear in the JSON export:

  • unique_id appears like gsx$uniqueid.
  • name appears like gsx$name.
  • photo-file appears like gsx$photo-file.
  • Book Ref appears like gsx$bookref.

So, the header names from the Google Sheet get transformed in the JSON export: they get a prefix of gsx$, and the header name is converted to all lowercase letters with spaces and most special characters removed. On top of this, the actual cell value that you will eventually import is in a $t property one level under the header name. Now, you should create a list of fields to migrate using XPath expressions as selectors. For example, for the Book Ref header, the selector would be gsx$bookref/$t. But that is not the way to configure the Google Sheets data parser. The module makes some assumptions to make the selectors clearer: the gsx$ prefix and /$t hierarchy are assumed. For the selectors, you only need to use the transformed names. In this case: uniqueid, name, photo-file, and bookref.

Configuring the Migrate Google Sheets source plugin

With the JSON export of the Google Sheet and the list of transformed header names, you can proceed to configure the plugin. It will be very similar to configuring a remote JSON migration. The following code snippet shows the source configuration for the node migration:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: google_sheets
  urls: 'http://spreadsheets.google.com/feeds/list/1YVJt9isPNjkUNHf3YgoTx38r04TwqRYnp1LFrik3TAk/1/public/values?alt=json'
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: uniqueid
    - name: src_name
      label: 'Name'
      selector: name
    - name: src_photo_file
      label: 'Photo ID'
      selector: photo-file
    - name: src_book_ref
      label: 'Book paragraph ID'
      selector: bookref
  ids:
    src_unique_id:
      type: integer

You use the url plugin, the http fetcher, and the google_sheets parser. The latter is provided by the module. The urls configuration is set to the exported JSON link. The item_selector is not configured because the /feed/entry value is assumed. The fields are configured as in the JSON migration, with the caveat of using the transformed header values for the selectors. Finally, you need to set the ids key to a combination of fields that uniquely identify each record.

The rest of the migration is almost identical to the JSON example. Small changes were made to prevent machine name conflicts with other examples in the demo repository. For reference, the following snippet shows part of the process, destination, and dependencies section for the Google Sheets migration.

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_microsoft_excel_source_image
    source: src_photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_microsoft_excel_source_image
    - udm_libreoffice_calc_source_paragraph
  optional: []

Note that the node migration depends on an image and a paragraph migration, both already available in the example. One uses a Microsoft Excel file as the source, while the other uses a LibreOffice Calc document. Both of these migrations will be explained in the next article. Refer to this entry to learn how to run migrations that depend on others. For example, you can run: drush migrate:import --tag='UD Sheets Source'.

What did you learn in today’s blog post? Have you migrated from Google Sheets before? If so, what challenges have you found? Did you know the procedure to export a sheet in JSON format? Did you know that the Migrate Google Sheets module is an extension of Migrate Plus? Share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

Read more and discuss at agaric.coop.

Categories: Drupal

Aesthetic-Driven Development: creating Merchant of the Skies from announcement to Early Access launch - by Vladimirs Slavs

Gamasutra.com Blogs - 21 August 2019 - 7:43am
In October 2018, I told Helen to just draw whatever she wants and put it on Twitter. Ten months later, we released our game, Merchant of the Skies, our best game so far, into early access. It is a direct result of that decision.
Categories: Game Theory & Design
