Today I Learned how to Secure the Delayed Job Page with Spree Users

A client has an online store that is powered by an older version of Spree. I’m in the process of upgrading it and adding features to it at the same time. It’s a slow process, as upgrading to newer versions of Spree, which also requires upgrading Ruby and Rails, is no easy task.

One customization the customer has is a delayed job that fires when an order is completed. The delayed job performs some tasks that can take a while, which is why they are done in a separate process after the order completes.

Recently there were some issues with the delayed job task that forced me to look at how the delayed jobs were set up and managed. I found a couple of things that should be fixed, such as failed jobs being deleted. Not being able to find the error message for a failed job was a pain.
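
For the failed-jobs problem, Delayed Job has a worker setting that keeps failed jobs (and their last_error) around instead of deleting them. A minimal sketch, assuming Delayed Job is configured in an initializer (the file name is my assumption, not the client’s actual code):

# config/initializers/delayed_job.rb

# Keep failed jobs in the delayed_jobs table so the last_error column
# can be inspected instead of the job silently disappearing.
Delayed::Worker.destroy_failed_jobs = false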

The most serious issue I discovered was in the page used to view the delayed jobs, which used the Delayed Job Web gem. Access to the page was restricted by a password (good) but only via HTTP basic auth (bad) with a hard-coded password left by a previous developer (bad).
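
For context, the old setup looked roughly like the HTTP basic auth pattern from the Delayed Job Web README, only with the credentials hard coded. This is a reconstruction for illustration, not the client’s actual code:

# config/initializers/delayed_job_web.rb

# HTTP basic auth with a hard-coded password -- the pattern being replaced.
DelayedJobWeb.use Rack::Auth::Basic do |username, password|
  username == "admin" && password == "some-hard-coded-password"
end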

Reviewing the Delayed Job Web documentation I found that it does support authenticating with Devise. That was good, as Spree also uses Devise for authentication. After some research, trial, and error I found that the following will allow only Spree admins to access the delayed job page:

# config/routes.rb

Spree::Core::Engine.routes.prepend do
  # Only add the route (and mount Delayed Job Web) for authenticated Spree admins.
  authenticated :spree_user, ->(spree_user) { spree_user.admin? } do
    mount DelayedJobWeb, at: "/delayed_job"
  end
end

Notice that instead of using just :user we need to use :spree_user. Now if someone tries to view the delayed job page while not logged in, or while logged in as a non-admin Spree user, a 404 error is returned, because the authenticated block only adds the route when its constraint passes.

Page not found error message if not logged in as Admin.

If logged in as a Spree admin then you can view the page as normal.

Delayed job page displayed if logged in as Spree admin.

I struggled to create unit tests for the above. At first I just created some RSpec route tests:

# spec/routing/delayed_job_spec.rb

require 'rails_helper'

describe 'routes for delayed jobs', type: :routing do
  routes { Spree::Core::Engine.routes }

  context 'user not logged in' do
    it 'they cannot see the route' do
      expect(:get => "/delayed_job").to_not be_routable
      expect(:post => "/delayed_job").to_not be_routable
    end
  end

  context 'user logged in' do
    before(:each) do
      login_user
    end

    it 'they cannot see the route' do
      expect(:get => "/delayed_job").to_not be_routable
      expect(:post => "/delayed_job").to_not be_routable
    end
  end

  context 'user logged in as admin' do
    before(:each) do
      login_admin
    end

    it 'they can see the route' do
      expect(:get => "/delayed_job").to be_routable
      expect(:post => "/delayed_job").to be_routable
    end
  end
end

Unfortunately that failed with an error:

NoMethodError: undefined method 'authenticate?' for nil:NilClass

Turns out this is a known issue with Rails 3 and Devise, as outlined here. So instead I created integration tests for the delayed job security using Cucumber.

# features/delayed_job.feature

@javascript
Feature: Delayed Job

  @allow-rescue
  Scenario: I can't view Delayed Job page if I'm not logged in
    When I visit the delayed job page
    Then I get a 404 error

  @allow-rescue
  Scenario: I can't view Delayed Job page if I'm not an admin
    Given I am logged in as a user
    When I visit the delayed job page
    Then I get a 404 error

  Scenario: I can view Delayed Job page if I'm logged in as admin
    Given I am logged in as an administrator
    When I visit the delayed job page
    Then I can see the delayed job page

# features/step_definitions/delayed_job_steps.rb

When(/^I visit the delayed job page$/) do
  visit "/delayed_job"
end

Then(/^I get a 404 error$/) do
  expect(page).to have_text('Routing Error No route matches [GET] "/delayed_job"')
end

Then(/^I can see the delayed job page$/) do
  expect(page).to have_text('The list below shows an overview of the jobs in the delayed_job queue')
end
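
The feature file also relies on login steps that aren’t shown above. A minimal sketch of what they could look like, assuming the Spree test factories (:user and :admin_user via FactoryBot, or FactoryGirl on older versions), a standard Spree /login page, and the factories’ default "secret" password — all of which are assumptions, not the project’s actual code:

# features/step_definitions/authentication_steps.rb

Given(/^I am logged in as a user$/) do
  log_in_as(FactoryBot.create(:user))
end

Given(/^I am logged in as an administrator$/) do
  log_in_as(FactoryBot.create(:admin_user))
end

# Log in through the UI so it also works with the @javascript driver.
def log_in_as(user)
  visit "/login"
  fill_in "Email", with: user.email
  fill_in "Password", with: "secret" # default password in the Spree test factories (assumed)
  click_button "Login"
end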

While not unit tests, having integration tests is better than nothing.

As I continue to upgrade the client’s Spree store I’ll eventually replace the unsupported delayed_job gem with something that is still maintained, perhaps Active Job or Sidekiq.

P.S. – This was one of the first songs that came up when I searched for songs about jobs. I had never heard it before but it made me chuckle.

Take this job and shove it
I ain’t workin’ here no more
My woman done left and took all the reasons
I was working for
Ya better not try to stand in my way
As I’m walkin’, out the door
Take this job and shove it
I ain’t workin’ here no more


My Takeaway from Reading David and Goliath


Book: David and Goliath
Author: Malcolm Gladwell

The subtitle for the book is “Underdogs, Misfits, and the Art of Battling Giants” and, as you would expect, it has many examples of an underdog beating a giant. Often the underdogs have to use unconventional approaches, as facing the giant head on would result in their defeat.

An item that almost became my takeaway is that sometimes the underdog is not really the underdog; sometimes the underdog is more powerful than they appear. A good example of this is the title story of David and Goliath as examined by Gladwell. He argues that David was armed with the superior weapon: the sling, with its much longer range and stopping power than Goliath’s sword. Basically David brought a gun to a knife fight and, unsurprisingly, won.

My actual takeaway from this book is to watch for the inverted U curve.

If you are at one end of the curve, say the right side, then decreasing the amount by a little gets you better results. The problem is that if you decrease the amount too much you end up with results just as bad as when you had too much.

The example in David and Goliath is classroom size. Large class sizes, say greater than 30, resulted in poor grades (results), and when the class size was reduced the students’ grades got better. The problem is that if you make the class size too small, say fewer than 18, the students’ grades are just as bad as if the class were too large. What you ideally want is the optimal class size of somewhere around 18 students: enough for students to group up and interact, but not so many that they overwhelm the teacher.

My takeaway is to keep an eye out for the inverted U curve in my own life: pay attention to when I’ve reached the optimal amount of work, play, exercise, vegetables, etc. The same goes for my professional life. What is the optimal number of blog posts per month?


Generate a Todo List in Standard (Rubocop)

The older I get the more I appreciate code linters. Something that can detect and often correct my formatting errors? Great! One less thing I have to worry about. Then I can spend my time on more important tasks such as fixing a bug, adding a new feature, or uniting/conquering the world.

The default linter for Ruby is Rubocop. It works great but I find its default rules too restrictive. You can change the defaults but that is time consuming and confusing. At least confusing to me. While searching for a Rubocop config that I liked I came across Standard.

Standard is a wrapper for Rubocop with defaults that I like. Its goal is to remove thinking about linting: just install it and it works, with no configuration setup and reasonable defaults. Great! Looks like everything I wanted.

Well, almost everything. Standard did not have a way to generate a Todo file: a file that lists all the existing errors in a project that we want to ignore until we get a chance to fix them.

Why would you want this? Well, if you work with Corgibytes you end up working on legacy projects: projects with poorly written code, little to no tests, and no automated build.

The first thing we do when inheriting a legacy code base is to baseline it: take note of code coverage, which automated tests are failing, known bugs, and of course what linting errors exist. Once we have the baseline numbers we make sure any changes only improve the code, not make things worse. For example, we always want the code coverage number to stay the same or go up; it should never go down. Likewise, the number of linting errors should always be decreasing.

Now Standard 0.4.0 has a way to generate a baseline in the form of a Todo file, which means you can incorporate Standard into your build procedure. For example, if you run Standard on my old website it will spit out lots of errors:

Standard Lots of Issues

To create the baseline for the linter and generate the Todo file, run Standard with the following command:

standardrb --generate-todo

This will generate a .standard_todo.yml file that contains a list of all the files with errors in them. For my old website there are lots of errors.

Now when we run Standard we don’t get any errors. That said, we do still get a nice message reminding us to remove files from the Todo file.
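
One way to wire the check into a build, assuming a Rake-based project, is the Rake task that ships with Standard. A minimal sketch (the default task list here is just an example):

# Rakefile

require "standard/rake"

# Fail the build if any file not covered by .standard_todo.yml has lint errors.
task default: [:standard]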

So go ahead and try it out. If you have any feedback please let me know by opening an issue. Special thanks to TestDouble and Searls for creating Standard and working with me on the pull request.

P.S. – The below song is not related to the post but to the current state of Alberta. It has been a rough couple of months for many due to the economic fallout from the virus shutdown and the price of oil. And I, like many, am missing travelling. Who would have thought I would miss driving?

Hurtin’ albertan with nothing more to lose
Too much oil money, not enough booze
East of the rockies and west of the rest
Do my best to do my damnedest and that’s just about all I guess


A Pleasant Development Environment featuring Docker and Rails

With the current virus issues I forgot to publicize that my presentation about creating pleasant development environments was posted, along with an example of Dockerizing an existing Rails application. You can find the slides for the talk here. Finally, you can find a ready-to-use Rails template that uses Docker here.

Thanks to YEGRB for allowing me to present and to Will for his awesome editing skills. Also thank you to Corgibytes for feedback on the draft presentation.


A Rails 6 Template and Then Some

I appreciate people posting getting-started examples and templates online. They are good for getting started and playing with a new technology. The problem is the templates are usually not production ready. They are also missing a bunch of the developer tools and best practices, such as linters and automated builds, that every project will eventually need.

Screen shot of Rails Template GitHub page.

So I created a Rails 6 template that includes many of the tools I want on a Rails project. I wanted the template to be easy to use but also contain all the tools I wanted. This includes a Docker container, a linter, a type checker, and build scripts. Actually the template has two build scripts: one for GitHub Actions and one for GitLab CI. The Docker script builds both development and production images.

Finally, the template is deployed with production settings to the new Render hosting service. You can see it here.

Let me know what you think, suggest improvements, or report a bug by opening an issue. Pull requests are also accepted.


Today I Learned Altering a SQL Column Removes its Default Value

This one I actually already knew but temporarily forgot about, so I got to relearn it. In MySQL, and many other databases, redefining a column with an ALTER statement will remove any properties not explicitly listed.

For example, say you want to update a MySQL database to support utf8mb4. This requires updating the existing string columns to utf8mb4, which my co-worker did in a script that looked like:

ALTER TABLE #{table} CHANGE #{column_name} #{column_name} VARCHAR(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci

I reviewed the pull request and didn’t catch the error even though I had been burnt by this alter-column issue in the past. I guess my scar must have healed, or at least faded enough that I forgot about it.

The problem was some columns had default values set. Since the default values were not specified when redefining the columns, they got lost. The script should have looked like the below, which I stole from here.

SELECT
  CONCAT(
    COLUMN_NAME,
    ' @new_type',
    IF(IS_NULLABLE='NO', ' NOT NULL ', ' '),
    IF(COLUMN_DEFAULT IS NOT NULL, CONCAT(' DEFAULT ', QUOTE(c.COLUMN_DEFAULT), ' '), ' '),
    IF(COLUMN_COMMENT IS NOT NULL AND COLUMN_COMMENT != '', CONCAT(' COMMENT ', QUOTE(c.COLUMN_COMMENT), ' '), ' '),
    EXTRA
  ) AS s
FROM
  INFORMATION_SCHEMA.COLUMNS c
WHERE
  TABLE_SCHEMA=#{database}
  AND
  TABLE_NAME=#{table}
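
The SELECT above emits one definition per column, with the NOT NULL, DEFAULT, and COMMENT pieces carried along, so presumably the @new_type placeholder just gets swapped for the utf8mb4 definition before being fed back into the CHANGE statement. A rough Ruby sketch of that last step — variable names and the execute call are hypothetical, not the actual script:

# column_definition is one row returned by the SELECT above,
# e.g. "name @new_type NOT NULL  DEFAULT 'unknown'"
new_type = "VARCHAR(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci"
definition = column_definition.sub("@new_type", new_type)

# CHANGE keeps the column name and now carries the default value along.
execute "ALTER TABLE #{table} CHANGE #{column_name} #{definition}"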

Luckily we had a bunch of automated tests that caught the problem. Actually, we didn’t catch it right away: there was another issue where our automated CI tests were running against the database without the utf8mb4 migrations. Fortunately we noticed the CI test issue, and the missing default values, just before doing the production release.

Lesson learned? Luck comes to those who are prepared (i.e. have automated tests) and who pair when doing complicated risky releases.

P.S. – No lyrical reason for picking this song. It just has the word lucky in it and it’s catchy. I had trouble finding a song about plain old luck and not lucky in love/lust.


My Takeaway from Reading Sometimes you Win, Sometimes you Learn

Book: Sometimes you Win, Sometimes you Learn
Author: John C. Maxwell

My takeaway from this book had nothing to do with its premise: learning from your mistakes. Instead my takeaway had to do with writing. Specifically, make sure your audience can relate to your examples.

One of the first examples of making a mistake in the book is when the author accidentally brings a loaded gun to the airport. You can read all the details here, but in summary he was given a handgun as a gift while on a trip. He flew home on a private plane where, after they landed, the pilot showed him how to load the gun (this part is only in the book). He put the gun in his briefcase and forgot about it when he got home. On his next trip, on a commercial plane, he took the briefcase with the loaded gun to the airport. You can figure out what happened when he got to security.

After the incident the author appears to have suffered no loss aside from some embarrassment. No charges, monetary loss, or loss of reputation. It is also hard to figure out what he learned from the incident.

I kept wondering who the target audience for that example was. It clearly was not me, but I couldn’t even picture anyone I know relating to it. It’s a shame that example was at the beginning of the book, as later examples are much better: examples where I felt the loss the person suffered and the lessons they learned as they recovered from it.

So my takeaway is to make sure I keep my audience in mind when writing, especially for my examples. Will the reader be able to picture themselves in the example? If not themselves, can they picture someone else? Will the example trigger the emotion(s) I’m hoping for? If I’m writing a technical example, what level of developer is it targeted at?

Now I just need to think about who I wrote this blog post for…


Today I Learned About GitHub’s Dependabot

Recently I created a CI build for the Introduction to ORM for DBAs presentation example code. One of the reasons I picked this code base was so I could try out Dependabot on the security alerts I was getting.

Security alert in GitHub.

The security alert is for the ASP.NET Core NuGet package. The same issue is listed multiple times because the code is duplicated several times for the various steps in the example.

List of Security Alerts in GitHub

Viewing more details about the error I see it recommends upgrading the package to 2.0.9 or later.

Security Alert Details

Let’s try the automatic fix and see what happens.

This will create a pull request and kick off an automated build in the Azure Pipeline for this project.

That is no good. It appears that I have a direct reference to EntityFrameworkCore.Design in my project. Let me go look.

There it is. Let’s update it to the latest version of 2.0.x. Now that I think about it I wonder if we can just remove it? Let’s save that for a later commit.

It builds and runs on my local machine. Commit our changes and see what the CI build says.

Now we can squash and merge this commit and we are all done. At least for example 1 of 10. I was really hoping Dependabot would auto-magically fix all the broken dependencies but it appears I have some manual work to do. Oh well. Maybe it will work better next time.

P.S. – Robot Rock.


Notes on Fixing Ubuntu 18.04 VM not Booting

Some notes so I remember how to fix this problem if it happens again and I don’t waste a bunch of time figuring it out right before a customer production release. Not a great start to my day.

This issue happened to me many moons ago when I first upgraded to VirtualBox 6 and Ubuntu 18. This is the second time, so I had better write down what I did so I hopefully remember it if it happens a third time.

The problem is my Ubuntu VM boots but freezes before getting to the login screen. The screen usually stays purple but sometimes goes black.

The problem occurred after upgrading the Ubuntu 18.04 guest to kernel 4.15.0-52. I also installed updates on my Windows 8.1 host (I know, I need to upgrade to Windows 10, it’s on the list) but I don’t think that had anything to do with the problem. There were no recent updates to VirtualBox.

The first step in trying to fix this problem is to boot into safe mode. If you don’t know how to boot into safe mode in Ubuntu, hold down the left Shift key while booting.

You won’t be able to run any of the diagnostics because VirtualBox can’t remount the file systems as read only.

Just choose resume to continue booting in safe mode, which will hopefully show the login screen but with the VirtualBox Guest Additions graphics drivers disabled. This means you will get an 800 x 600 screen that can’t be resized.

Now try to re-install the VirtualBox Guest Additions.

Shut down and restart the VM and see what happens. If it boots then you are good to go. If it does not boot, which it didn’t for me, you can change the VirtualBox graphics controller. Do this by opening up the settings for your VM and selecting Display.

Most likely your graphics controller will be VMSVGA if you are running a newer version of Linux. Change it to VBoxSVGA and reboot. If it’s already at VBoxSVGA try VMSVGA instead. You can read about the differences between the settings here.

Start up the VM and hopefully it won’t hang.

I’m assuming this graphics driver problem will be fixed by a future release of Ubuntu and/or VirtualBox but since this is the second time it’s happened to me it would not surprise me if it happens again.

I currently don’t notice any difference between VMSVGA and VBoxSVGA but it’s only been an hour or so.

P.S. – I haven’t seen the Into the Spider-verse movie yet but my daughter has and really enjoyed it. I really like the animation style, assuming the music video uses clips from the movie.

Then you’re left in the dust, unless I stuck by ya
You’re a sunflower, I think your love would be too much
Or you’ll be left in the dust, unless I stuck by ya
You’re the sunflower, you’re the sunflower


Today I Learned How to Setup Azure Pipelines CI

Our last EDMUG meetup was an excellent presentation about Azure DevOps. Azure DevOps reminds me of GitLab in that it is more than just continuous integration (CI). It includes issue tracking, repositories, and continuous delivery. All pretty standard stuff.

However, one thing did jump out at me: the fact that it has built-in images you can use to run the build on. Build images with Visual Studio pre-installed. They also have macOS Mojave! No need to create your own build runner, either VM or Docker, like you do with so many other CI tools.

I don’t think using a 3rd party build image is the answer for everything. There are cases where I would want more control over the build image but for the example code I use for presentations the default image is good enough. No need for me to create and maintain a build image.

For this example I’m going to use my Introduction to ORM for DBAs presentation code. It’s a good project to start with as the presentation code is simple and does not have many dependencies. I also want to get an automated build working before trying GitHub’s Dependabot to auto-magically update dependencies for the presentation code.

Once I created my Azure DevOps account I then created the Introduction to ORM for DBAs project. For this new project the first thing I did was disable all the features except Pipelines, as I only need CI, not Git repositories, bug tracking, etc.

Next I created the Pipeline.

I was prompted for where the code is stored; in my case it’s stored in GitHub. I was also prompted to give Azure Pipelines access to GitHub, which I did.

Next I needed to pick the repository and allow Azure Pipelines to access it.

Pipelines now presents some default builds. I picked the ASP.NET Core one.

It then generates a reasonable default build file. This file will be stored in the root of your project with the name azure-pipelines.yml.

# ASP.NET Core (.NET Framework)
# Build and test ASP.NET Core projects targeting the full .NET Framework.
# Add steps that publish symbols, save build artifacts, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/dotnet-core

trigger:
- master

pool:
  vmImage: 'windows-latest'

variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'

steps:
- task: NuGetToolInstaller@0

- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'

- task: VSBuild@1
  inputs:
    solution: '$(solution)'
    msbuildArgs: '/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:DesktopBuildPackageLocation="$(build.artifactStagingDirectory)\WebApp.zip" /p:DeployIisAppPath="Default Web Site"'
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'

- task: VSTest@2
  inputs:
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'

Overall I like the defaults that were picked. Let’s examine this script in more detail and see what it is doing. First, it triggers a build on any changes to the master branch. I wonder if it will also build on pull requests?

trigger:
- master

Next it lists the image the build will be performed on. The 'windows-latest' image is VS 2019 on Windows Server 2019. Works for me.

pool:
  vmImage: 'windows-latest'

Next it defines some variables to use in the actual build. I like the fact it finds all the solution files, as the example project has 10 separate solutions, one for each of the steps.

variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'

Next are the actual build steps, with the first being installing NuGet and restoring the packages. Nothing special here.

- task: NuGetToolInstaller@0

- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'

After that is the step to compile. In this case compiling is done using Visual Studio. I wonder why Visual Studio was picked instead of just building with the .NET Core command line? Maybe so it can produce the deployment packages?

I think I can remove building the deploy packages as this build is just for an example and will never be released. For now let’s just leave it and see what happens.

- task: VSBuild@1
  inputs:
    solution: '$(solution)'
    msbuildArgs: '/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:DesktopBuildPackageLocation="$(build.artifactStagingDirectory)\WebApp.zip" /p:DeployIisAppPath="Default Web Site"'
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'

The final step is running the tests. I don’t have any tests for my example so I’ll be removing this step, but for now I’ll just leave it and see what happens with the first build.

- task: VSTest@2
  inputs:
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'

Let’s try to build with the default file and see what happens.

That is no good. Digging into the error message it appears that my application uses .NET Core 2.0 but it’s not installed on the image. We can fix this by installing .NET Core 2.0 using the DotNetCoreInstaller task as our first step.

When specifying the version of .NET Core it wants the SDK version, not the public .NET Core version. In my example I’m using the out-of-support .NET Core 2.0 but I would like to install the latest version of it. Using this handy chart I can see its SDK version is 2.1.202.

Let me try the build now and see what happens.

That looks better. The final couple of steps are to remove the build steps we don’t need, such as the tests, and to stop creating the deployment packages. I also changed the image to windows-2019 instead of windows-latest, which should prevent the build from mysteriously failing if latest changes to VS 2020 or a different version of Windows. The final build script now looks like:

# ASP.NET Core (.NET Framework)
# Build and test ASP.NET Core projects targeting the full .NET Framework.
# Add steps that publish symbols, save build artifacts, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/dotnet-core

trigger:
- master

pool:
  vmImage: 'windows-2019'

variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'

steps:
# Install .NET Core 2.0.9.  Not supported anymore but
# it is what we want to upgrade to in order to fix the
# security issues.  After the build is working we will
# upgrade to a supported .NET version.
- task: DotNetCoreInstaller@0
  inputs:
    version: '2.1.202'

- task: NuGetToolInstaller@0

- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'

- task: VSBuild@1
  inputs:
    solution: '$(solution)'
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'

When this build runs there won’t be any warnings for the tests because they have been removed.

That was relatively painless. If you are curious you can find the public pipeline build here.

P.S. – For some reason I find Angels and Airwaves great programming music for getting into the zone and their latest song, Rebel Girl, is no exception.

D-d-do you wanna go back to where we started?
Back before we were broken hearted?
