Surgery

Woke up at 4:45am to get to Thomas Jefferson University Hospital, where my surgery was scheduled for 7:30am. There was an emergency surgery ahead of me, so I did end up waiting for a bit. My surgeon came in to update me, and that’s when I officially found out that after the workups I did on the 13th, I was definitely going in for a LeFort 1 osteotomy: no splint, no graft, and no ligatures. I was kinda psyched about that because it meant a shorter recovery time.

mom and me – before
before – just a little bit, right?

Said my goodbyes and got carried off to the OR holding room. Got asked to confirm my name and date of birth every step of the way, to make sure I was getting the right surgery 🙂

The staff at Jefferson were all very nice and very professional. I talked to the anaesthesiologist, the person who was going to intubate me during the surgery, the chief surgeon, and some other staff. Got hooked up to an IV, which, from reading other blogs, I had a feeling was going to be the most painful part of this whole ordeal. I forgot to bring my glasses, but I don’t think they would have let me keep them anyway, so I listened to Family Feud on the TV.

Relaxed for about an hour, then got wheeled to the OR.

I only remember the OR for a few minutes. Saw the giant operating lights and the 10 or so people that were all getting ready to cut into me. Got lifted onto the operating table, which had wings to support all my limbs. An oxygen mask got placed over my face, and that’s all I remember before waking up.

Getting ready

After being in braces for a year, it’s time to correct my anterior open bite with jaw surgery (LeFort 1 osteotomy).

Here’s Day -365, which was on June 16th, 2013. I just got the braces put on.

Day -365

Here’s October 9th, 2014, getting my surgical lugs on. Usually they are rubber, but going into surgery they have to be metal.

it’s really not that bad

Look at how much my teeth came together! But there’s still work to be done to get them safely down to my bottom teeth.

I’m scheduled for surgery at 7:30 tomorrow. I’m nervous for what I feel is no reason, but my brain still thinks I should panic. I’ve mentally prepared for all of this, and I’m somewhat excited just to get it over with.

drush uli gets “access denied” using vagrant

I noticed that “drush uli” would randomly result in “access denied” when Drupal was running inside a Vagrant machine.

The funny thing was, if I waited a little bit and tried the user login link again, it worked fine.

The problem is that the Vagrant machine’s clock may be a few seconds ahead of your host machine’s, so the token that uli generates isn’t valid until then.

Solution: install “ntp” on both your host and inside the Vagrant machine.
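
To check whether you’re affected and to fix it, something like this works (assuming a Debian/Ubuntu guest – swap in your distro’s package manager otherwise):

```bash
# Compare the two clocks (run from the host, inside the Vagrant project dir).
date -u && vagrant ssh -c 'date -u'

# Install ntp inside the guest (Debian/Ubuntu shown).
vagrant ssh -c 'sudo apt-get install -y ntp'

# Then install ntp on the host the same way, using your host's package manager.
```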

dynamic drush alias files

Alias files are useful. If you have many sites on a remote server, instead of manually adding them to your alias file every time, write some code to automatically generate them.

Now whenever a site is configured on your target server, you will automatically have the alias!
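
For example, a dynamic alias file might look something like this – a sketch where the server name, SSH user, docroot layout, and URI pattern are all assumptions to swap for your own:

```php
<?php
// ~/.drush/remote.aliases.drushrc.php
// Drush includes this file and uses whatever ends up in $aliases.

// Ask the remote server for its site directories (one ssh call per drush
// run; cache the result if that gets slow).
$output = shell_exec('ssh deploy@example.com ls /var/www/sites 2>/dev/null');

foreach (array_filter(explode("\n", trim((string) $output))) as $site) {
  $aliases[$site] = array(
    'remote-host' => 'example.com',
    'remote-user' => 'deploy',
    'root' => "/var/www/sites/$site/docroot",
    'uri' => "$site.example.com",
  );
}
```

Run drush sa to confirm the generated aliases show up.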

consider versioning your .drush directory

Our team uses Drush frequently throughout the development workflow for things like grabbing database dumps of sites and running commands – drush make, registry rebuild, custom company-specific ones, etc. – and in the past, everyone had to manually download or copy those commands into their own .drush directory.

Now we version the .drush directory, so when a new developer onboards, they can just check out the .drush directory from version control.

This is incredibly useful: you can build a very powerful devops toolkit across all team members, since everyone will have the same Drush setup!
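
The setup is plain version control – the repository URL here is just a placeholder:

```bash
# One-time setup: put the existing .drush directory under version control.
cd ~/.drush
git init
git add .
git commit -m "Shared Drush commands, aliases, and policies"
git remote add origin git@example.com:team/drush.git
git push -u origin master

# Onboarding: a new developer clones it straight into place.
git clone git@example.com:team/drush.git ~/.drush
```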

a faster alternative to sql-sync

Where I work, I probably load environments about 50 times a day: testing bug fixes, data migrations, reproducing errors, failure analysis, and so on.

Even if I can save 30 seconds with an automated database reload process, it will add up.

There’s been work on improving drush sql-sync, including https://drupal.org/project/drush_sql_sync_pipe

The bottleneck is that drush sql-sync works with temporary files – meaning it has to:

  1. Connect to the remote machine
  2. Perform a sql-dump to a file on the remote machine and compress it
  3. Transfer that file to your machine
  4. Restore the dump to your local database

The problem with this is that each step is executed consecutively. It would be better if all these steps were performed concurrently. Drush defaults to this method because it is compatible with most systems. If you’re a power user though, you may want to find a faster solution.

What we’d like to do is:

  1. Connect to the remote machine
  2. Perform these steps at the same time:
    1. Dump the database remotely
    2. Compress on the fly
    3. Stream it to your local machine
    4. Uncompress on the fly
    5. Pipe the SQL into the local database

I wrote a little script that accomplishes just that, plus a little extra for dumping locally. The key is piping the data instead of saving it to temporary files. Note that this only works on Linux/Mac.
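
Here’s a minimal sketch of the idea (not the exact script – the local-dump extra and error handling are left out, and compression is delegated to ssh, e.g. 'ssh-options' => '-C' in the alias record):

```bash
#!/usr/bin/env bash
# fastdump: reload the local database by streaming it from a remote alias.
# Usage: fastdump @someAlias
set -euo pipefail

ALIAS="$1"

# 1. Drop all local tables so tables that don't exist upstream are gone.
drush sql-drop --yes

# 2. Stream the remote dump straight into the local database. For a remote
#    alias, drush dispatches sql-dump over ssh and the dump arrives on
#    stdout, so nothing is written to disk on either side.
drush "$ALIAS" sql-dump | drush sql-cli

# 3. Run any pending update hooks.
drush updatedb --yes
```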

Put this script somewhere (maybe ~/bin) and chmod a+x it.

From within your site directory, run fastdump @someAlias

This will:

  1. Delete all the local tables (to ensure tables that don’t exist in your source are gone)
  2. Restore the database from an alias
  3. Run updates

But quickly! The next step for this would be making it into a Drush command instead of a shell script.

don’t kill your live site with a sql-sync

We have a shared alias file that represents every site that we work with. For example,

@abcstage
@abctest
@abclive

are all valid aliases. Developers would have access to stage and test, while live only works for privileged users.

But we still want to make sure that no funny business goes on.

Create a file, ~/.drush/policy.drush.inc:
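
Something along these lines does the trick – a sketch modeled on the example policy.drush.inc that ships with Drush, with the “live” suffix check matching our alias naming convention:

```php
<?php

/**
 * Implements drush_hook_COMMAND_validate() for sql-sync.
 *
 * Refuses any sql-sync whose destination alias ends in "live".
 */
function policy_drush_sql_sync_validate($source = NULL, $destination = NULL) {
  if (preg_match('/live$/', $destination)) {
    return drush_set_error('POLICY_DENY', dt('You may not sql-sync to a live site. If you really need to, do it manually on the server.'));
  }
}
```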

This will ensure that nobody can accidentally sql-sync to a live site. You can adjust the criteria as needed.