improve AWS EFS sparse file throughput with fpsync

After running into an issue when copying thousands of files at once to EFS, I came across https://aws.amazon.com/premiumsupport/knowledge-center/efs-troubleshoot-slow-performance/

Let’s look at some benchmarks. The issue was that this job was taking 30+ seconds, which would have timed out the HTTP server (it’s not yet a background job). It unzips a file to a temporary directory (not in EFS), does some validation, then copies the contents to EFS. The zip in question contained about 3,000 files, around 150MB extracted.

# find /big_dir | wc -l
3098
# time cp -R /big_dir /efs_dir

real    0m37.957s
user    0m0.052s
sys     0m0.960s

Ouch, that’s not good. We could try rsync, maybe that will help:

# time rsync -r /big_dir /efs_dir

real    1m10.210s
user    0m0.931s
sys     0m1.744s

Even longer! Why is this? It’s because:

Metadata I/O occurs if your application performs metadata-intensive operations such as "ls", "rm", "mkdir", "rmdir", "lookup", "getattr", or "setattr", and so on. Any operation that requires the system to fetch the address of a specific block is considered a metadata-intensive workload.

rsync is also checking the destination file to see if it needs to sync it, which causes a bottleneck. So plain rsync and cp aren’t an option.

The issue is that Elastic File System is not built for serial operations: copying a file, waiting, then copying the next one. EFS must replicate every file to multiple locations, so there is a delay while it does so. There is also some overhead from NFS, since each filesystem operation is a network call. What EFS is designed for is parallel operations. But rsync and cp can’t run in parallel, so you’ll need to manually batch up your files or use the tool referenced in the document above called fpsync (Filesystem partitioner sync).
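
If you wanted to batch by hand, something like the following sketch would work with standard tools (the batch size of 100 and the 10 parallel jobs are illustrative; rsync’s -R flag preserves each file’s relative path):

# run from the source directory: 10 parallel rsyncs, 100 files per batch
cd /big_dir && find . -type f -print0 | \
  xargs -0 -n 100 -P 10 sh -c 'rsync -aR "$@" /efs_dir/' _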

What fpsync does is split a directory of files into chunks and then sync those chunks in parallel via rsync. This is also possible with GNU Parallel, but you’d have to write your own script. fpsync is available on CentOS and probably many other distributions. Let’s run it out of the box:

# time fpsync /big_dir /efs_dir

real    0m59.790s
user    0m1.975s
sys     0m3.925s

Not much of an improvement…but why? Because fpsync doesn’t run in parallel by default, and you have to tweak it a bit. Let’s process 100 files at a time using 10 concurrent runners:

# time fpsync -f 100 -n 10 -v /big_dir /efs_dir
1662569967 Info: Run ID: 1662569967-43986
1662569967 ===> Analyzing filesystem...
1662569968 <=== Fpart crawling finished
1662569980 <=== Parts done: 29/29 (100%), remaining: 0
1662569980 <=== Time elapsed: 13s, remaining: ~0s (~0s/job)
1662569980 <=== Fpsync completed without error in 13s.

real    0m13.467s
user    0m2.086s
sys     0m4.400s

Much better! But let’s try more concurrent runners. Since we had 3,000 files, there would have been a queue in our last command (100 files × 10 runners = only 1,000 files in flight at once). So let’s use 50 concurrent runners with 50 files per batch:

# time fpsync -f 50 -n 50 -v /big_dir /efs_dir
1662570120 Info: Run ID: 1662570120-51913
1662570120 ===> Analyzing filesystem...
1662570122 <=== Fpart crawling finished
1662570129 <=== Parts done: 58/58 (100%), remaining: 0
1662570129 <=== Time elapsed: 9s, remaining: ~0s (~0s/job)
1662570129 <=== Fpsync completed without error in 9s.

real    0m8.903s
user    0m2.093s
sys     0m4.868s

So, the more concurrent copy operations we can run, the better.

On a regular disk this wouldn’t have any effect, since the filesystem operations are negligible and your only bottleneck is disk speed. It might even slow things down. There may be other fpsync options that would speed this up even more. What about rsync --inplace? This eliminates a step rsync usually takes: creating a temporary file and then renaming it over the destination.

# time fpsync -o "--inplace" -f 100 -n 50 -v /big_dir /efs_dir
1662584365 Info: Run ID: 1662584365-126224
1662584365 ===> Analyzing filesystem...
1662584367 <=== Fpart crawling finished
1662584371 <=== Parts done: 29/29 (100%), remaining: 0
1662584371 <=== Time elapsed: 6s, remaining: ~0s (~0s/job)
1662584371 <=== Fpsync completed without error in 6s.

real    0m5.872s
user    0m1.634s
sys     0m3.438s

Running batches of 100 brought it down to under 6s. Beyond that it started to get slower, and running a huge number of rsyncs with very small batches also got slower. This is likely due to the system itself; after all, it’s running 250+ instances of rsync.

Attaching Rules conditions to a config entity

Enhancing modules with Rules-based conditions was very easy in D7. Using hook_default_rules_configuration we could dynamically generate a bunch of rules named mymodule_rule_[some_key], use rules_ui()->config_menu() to add the menu items for the Rules admin UI, then invoke the generated components to evaluate conditions. Every entity or option would have its own Rules component that we could edit to add arbitrary conditions. Some examples of this in D7 were:

  • Payment methods (Ubercart/Commerce)
  • Coupons
  • Tax rules
  • Block visibility
  • User access or eligibility

And anything else where you could not possibly know ahead of time what conditions would be needed. Some of the above were changed to use Core conditions in D8, but that didn’t cut it for our use case, since I could not possibly write a new condition for every requirement that came up. Real-life examples of these are:

  • A user can only claim a certain kind of course credit when the credit code on the course contains specific characters and the user is from Florida.
  • The user can only use the payment method when there is a valid role attached to the user and specific products are in the cart.
  • A user is not eligible to receive a certain type of credit when they are already eligible to receive another type of credit.
  • A quiz taker can only see correct answers once two weeks have passed and the user exhausted two attempts.

These aren’t out of the ordinary, and without something like Rules we would be writing custom PHP if/else trees every day. For a SaaS-like product, this is not ideal.

It’s a little trickier to add arbitrary conditions to entities, but well worth it in the end. Rules provides a test module that you can look at: rules_test_ui_embed. This example illustrates a single rule component embedded into a page. But we need to build Rules into all instances of a configuration entity.

Rules provides an interface RulesUiComponentProviderInterface that we can use to store component configuration on our entity types. This was added in https://www.drupal.org/project/rules/issues/2659016 and https://www.drupal.org/project/rules/issues/2671056 but so far, there don’t seem to be any contributed modules that implement this! Rules does use it for its own action and condition components.

There is documentation for extending Rules with new conditions and actions, but it is pretty lacking around integration. There is some embedded developer documentation so let’s take a look.

If we look at rules_test_ui_embed we see that there is some sort of plugin file – rules_test_ui_embed.rules_ui.yml. That must define something!

rules_test_ui_embed.rules_ui.yml (reconstructed approximately; check the module source for the exact keys):
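
# Reconstructed from memory of the Rules test module.
rules_test_ui_embed.settings_conditions:
  label: 'Settings conditions'
  base_route: rules_test_ui_embed.settings
  settings:
    # Save the component into this config object, under this key.
    config_name: rules_test_ui_embed.settings
    config_key: conditions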

The above defines a Rules UI plugin, which will create routes based on rules_test_ui_embed.settings. The configuration for the Rules component will be saved to rules_test_ui_embed.settings under the conditions key. But that doesn’t work for us; we need multiple components on multiple entities.

There’s another parameter in RulesUiConfigHandler we can use to allow wildcard editing of components: config_parameter.

It appears that config_parameter and config_key can be used to dynamically set which configuration object and key will be updated. With a little trial and error, I applied it to Quiz feedback types. Feedback types hold sets of review options that display feedback to quiz takers after they answer a question or finish an entire quiz. They can also be used for post-review feedback: revisiting the quiz after two weeks, only seeing correct answers after three attempts, only seeing instructor feedback once given a role, and so on.

Let’s assume that we already have a QuizFeedbackType entity to allow creation of custom feedback “times”, and all the edit forms are already set up. We want to add conditions to each feedback type so that we can conditionally display their items. In Quiz we have two built-in: “Question” and “End”.

Define route and *.rules_ui.yml

This will indicate that we want Rules UI functionality appended to a route that we will also create. It will also tell Rules that we want the component to be saved onto the object loaded from the quiz_feedback_type parameter. Note how the _rules_ui option on the route matches the plugin name defined in quiz.rules_ui.yml:

quiz.rules_ui.yml (a sketch; the plugin name and settings are illustrative):
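
quiz.feedback_conditions:
  label: 'Quiz feedback conditions'
  base_route: quiz.feedback_conditions
  settings:
    # Save the component onto the entity upcast from this route parameter,
    # instead of a fixed config_name/config_key.
    config_parameter: quiz_feedback_type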

quiz.routing.yml (a sketch; the path and permission are placeholders):
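
quiz.feedback_conditions:
  path: '/admin/quiz/feedback-type/{quiz_feedback_type}/conditions'
  defaults:
    _form: '\Drupal\quiz\Form\QuizFeedbackConditionsForm'
    _title: 'Feedback conditions'
  requirements:
    _permission: 'administer quiz'
  options:
    # Must match the plugin name defined in quiz.rules_ui.yml.
    _rules_ui: quiz.feedback_conditions
    parameters:
      quiz_feedback_type:
        type: entity:quiz_feedback_type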

Define new form for editing a component

This is a normal form that extends ConfigFormBase, but is provided with a Rules UI handler from the plugin definition that matches the route above. Most of this code is copied from rules_test_ui_embed:

QuizFeedbackConditionsForm.php (a sketch based on rules_test_ui_embed; the exact Rules UI handler methods may differ between Rules versions):
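
<?php

namespace Drupal\quiz\Form;

use Drupal\Core\Form\ConfigFormBase;
use Drupal\Core\Form\FormStateInterface;
use Drupal\rules\Ui\RulesUiHandlerInterface;

/**
 * Edits the conditions attached to a feedback type. Sketch only.
 */
class QuizFeedbackConditionsForm extends ConfigFormBase {

  /**
   * The Rules UI handler, injected because of the route's _rules_ui option.
   *
   * @var \Drupal\rules\Ui\RulesUiHandlerInterface
   */
  protected $rulesUiHandler;

  public function getFormId() {
    return 'quiz_feedback_conditions_form';
  }

  protected function getEditableConfigNames() {
    // The component is saved onto the feedback type entity by the Rules UI
    // handler, not through this form's own config.
    return [];
  }

  public function buildForm(array $form, FormStateInterface $form_state, RulesUiHandlerInterface $rules_ui_handler = NULL) {
    $this->rulesUiHandler = $rules_ui_handler;
    // Embed the condition editing form provided by the Rules UI handler.
    $form['conditions'] = $this->rulesUiHandler->getForm()->buildForm([], $form_state);
    return parent::buildForm($form, $form_state);
  }

  public function validateForm(array &$form, FormStateInterface $form_state) {
    $this->rulesUiHandler->getForm()->validateForm($form['conditions'], $form_state);
  }

  public function submitForm(array &$form, FormStateInterface $form_state) {
    // Hands the edited component back to its provider (the feedback type
    // loaded from the route) so it can be saved.
    $this->rulesUiHandler->getForm()->submitForm($form['conditions'], $form_state);
    parent::submitForm($form, $form_state);
  }

}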

In buildForm we take in the Rules UI handler and use it to generate the condition form. In submitForm, the Rules UI handler will notify our Rules component “provider” that there is a component that has to be saved.

Implement RulesUiComponentProviderInterface

The rulesUiHandler from above requires the entity type to handle getting the Rules component and saving it onto itself, since we are not specifying a static config_name or config_key. We add the component property to config_export, then implement RulesUiComponentProviderInterface and its two methods (sketched after this list):

  • In getComponent() we check to see if the entity already has conditions and return a RulesComponent. If it does not, we provide a default that takes a QuizResult entity as context to evaluate.
  • In updateFromComponent(), we get the RulesComponent and store it on the entity.

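A sketch of the entity class (the Rules service and method names here are from memory of the API discussed in the linked issues; double-check them against your Rules version):

<?php

namespace Drupal\quiz\Entity;

use Drupal\Core\Config\Entity\ConfigEntityBase;
use Drupal\rules\Context\ContextDefinition;
use Drupal\rules\Engine\RulesComponent;
use Drupal\rules\Ui\RulesUiComponentProviderInterface;

class QuizFeedbackType extends ConfigEntityBase implements RulesUiComponentProviderInterface {

  /**
   * The stored Rules component configuration.
   *
   * Must be listed under config_export in the entity type annotation.
   *
   * @var array
   */
  protected $component = [];

  /**
   * {@inheritdoc}
   */
  public function getComponent() {
    if (!empty($this->component)) {
      return RulesComponent::createFromConfiguration($this->component);
    }
    // No conditions saved yet: provide a default, empty "and" component
    // that takes the quiz result being evaluated as context.
    $expression = \Drupal::service('plugin.manager.rules_expression')->createAnd();
    return RulesComponent::create($expression)
      ->addContextDefinition('quiz_result', ContextDefinition::create('entity:quiz_result'));
  }

  /**
   * {@inheritdoc}
   */
  public function updateFromComponent(RulesComponent $component) {
    $this->component = $component->getConfiguration();
    return $this;
  }

}
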
And there we go: components attached to entities.

Now that the components are stored on an entity that implements RulesUiComponentProviderInterface, we can invoke the rule in our code to validate the conditions:
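
(A sketch: $quiz_result is assumed to be a loaded quiz_result entity, and whether executeWithArguments() returns the condition result directly is an assumption, so verify against your Rules version.)

use Drupal\quiz\Entity\QuizFeedbackType;

// E.g. when deciding whether to show the "question" feedback items.
$feedback_type = QuizFeedbackType::load('question');
$conditions_met = $feedback_type->getComponent()
  ->executeWithArguments(['quiz_result' => $quiz_result]);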

Reference: https://www.drupal.org/project/rules/issues/3117749

drush uli gets “access denied” using vagrant

I noticed that, seemingly at random, “drush uli” would result in “access denied” when Drupal was inside of a Vagrant machine.

The funny thing was, if I waited a little bit and tried the user login link again, it worked fine.

The problem is that the Vagrant machine’s clock may be a few seconds ahead of your local machine’s.

So the token that uli generates isn’t valid until then.

Solution: install “ntp” on both your host and inside the Vagrant machine.
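
For example (a sketch; the package manager commands vary by distro):

# compare the guest and host clocks from the host machine
vagrant ssh -c 'date -u'; date -u

# then install ntp on both sides, e.g. on a Debian/Ubuntu guest
sudo apt-get install -y ntp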

consider versioning your .drush directory

Our team uses Drush frequently during the entire development workflow for doing things like grabbing database dumps of sites and running commands – drush make, registry rebuild, custom company-specific ones, etc. – and in the past everyone would have to manually download or copy them to their .drush directory.

Now, we version the .drush directory, so when a new developer onboards, they can just check out the .drush directory from version control.
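
Onboarding then looks something like this (the repository URL is hypothetical):

# check out the shared Drush configuration as ~/.drush
cd ~
git clone git@example.com:team/drush-config.git .drush

# clear Drush's command cache so the new commands are discovered
drush cc drush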

This is incredibly useful: you can build a very powerful devops toolkit across all team members, since everyone will have the same Drush setup!

a faster alternative to sql-sync

Where I work I probably load environments about 50 times a day. Testing bug fixes, data migrations, reproducing errors, failure analysis, and so on.

Even if I can save 30 seconds with an automated database reload process, it will add up.

There’s been work on improving drush sql-sync, including https://drupal.org/project/drush_sql_sync_pipe

The bottleneck is that drush sql-sync works with temporary files – meaning it has to:

  1. Connect to the remote machine
  2. Perform a sql-dump to a file on the remote machine and compress it
  3. Transfer that file to your machine
  4. Restore the dump to your local database

The problem with this is that each step is executed consecutively. It would be better if all these steps were performed concurrently. Drush defaults to this method because it is compatible with most systems. If you’re a power user, though, you may want to find a faster solution.

What we’d like to do is:

  1. Connect to the remote machine
  2. Perform these steps at the same time:
    1. Dump the database remotely
    2. Compress on the fly
    3. Stream it to your local machine
    4. Uncompress on the fly
    5. Pipe sql to database

I wrote this little script that accomplishes just that and a little extra for dumping locally. The key is piping data instead of saving it temporarily. Note that this only works on Linux/Mac.
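
The script isn’t reproduced here, but a minimal sketch of the approach looks like this (assuming Drush-style aliases with SSH access; the original also handled local dumps):

#!/usr/bin/env bash
# fastdump: stream a remote database straight into the local one.
# Usage (from within your site directory): fastdump @someAlias
set -euo pipefail

ALIAS="$1"

# 1. Drop all local tables so tables that no longer exist remotely are gone.
drush sql-drop -y

# 2. sql-dump runs remotely and its output streams back over SSH; pipe it
#    straight into the local database. No temporary files are written.
#    (Enable SSH compression in your alias or ~/.ssh/config to compress
#    the stream in flight.)
drush "$ALIAS" sql-dump | drush sql-cli

# 3. Run any pending database updates.
drush updb -y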

Put this script somewhere (maybe ~/bin) and chmod a+x it.

From within your site directory, run fastdump @someAlias

This will:

  1. Delete all the local tables (to ensure tables that don’t exist in your source are gone)
  2. Restore the database from an alias
  3. Run updates

But quickly! The next step for this would be making it into a Drush command instead of a shell script.

protecting content profiles in drupal 6

Content profiles in Drupal 6 are, by default, plain old nodes, so if they are published, everyone will have access to them.
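
Here is a sketch of the gist (mymodule and the 'profile' node type are placeholders; the comment linked below has the original):

<?php
/**
 * Implements hook_node_access_records().
 */
function mymodule_node_access_records($node) {
  if ($node->type == 'profile') {
    // One grant per profile node, in our own realm, keyed by the owner's uid.
    return array(
      array(
        'realm' => 'mymodule_profile_owner',
        'gid' => $node->uid,
        'grant_view' => 1,
        'grant_update' => 0,
        'grant_delete' => 0,
        'priority' => 0,
      ),
    );
  }
}

/**
 * Implements hook_node_grants().
 */
function mymodule_node_grants($account, $op) {
  if ($op == 'view') {
    // Each user is granted access only to the gid matching their own uid.
    return array('mymodule_profile_owner' => array($account->uid));
  }
}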

This sets up a realm and restricts it to the profile owner.

Pulled from https://drupal.org/node/837220#comment-3147640 – but this is the gist of it.

Then rebuild node access.