Today I Learned How to Create a Key Pair Using PuTTY

I recently had to generate a private/public key pair to access a Git repository.  While I’ve done this several times before, I never do it enough to remember all the steps, so this time I wrote it down.

Since my primary workstation runs Windows I use PuTTY to generate the keys.  If you thought PuTTY was just an SSH client then you are not alone (I used to think that too).  PuTTY’s unofficial tag line should be:

PuTTY.  It’s more than just an SSH client.

Once you have PuTTY installed, run the PuTTYgen application.  Make sure the type of key to generate is RSA and the size is 2048 bits, then click the Generate button.

Why RSA?  Because that is the type of key you want 99% of the time and it works with most clients and services.  Same with the 2048-bit length.  You can generate a longer key, say 4096 bits for better security, but it might not work with some clients and/or services.  That said, if your service requires a different key format then adjust the settings as needed.

PuTTYGen Empty Form

Wiggle your mouse when prompted and a few seconds later you should have a new key generated.

Now change the key comment so you remember what this key is for.  I also recommend protecting your key with a passphrase, basically a password.  This prevents someone from using your private key if they are able to get a hold of it.  Then click the Save private key button and save the key to a secure place.

PuTTYGen Save Private Key

PuTTYGen Save Private Key Prompt

Remember this is your private key and if someone gets a hold of it they can pretend to be you.  Similar to someone knowing your password.  In my case I save it to an encrypted location.

You should also backup your new key to a secure location.  In my case my keys are backed up to an encrypted location as part of my nightly backup.

Most remote services, such as GitHub, will ask you for your public key which you can cut and paste.

PuTTYGen Public Key

GitHub Adding Public Key

Important: When using your key remember to only share the public part.  Never share your private key!

Now you are all excited to start using the service you uploaded your public key to, such as cloning the Git repository.  Unfortunately you will get an error about the key not being valid, not found, or something similar.

On Windows you need to run the PuTTY Pageant application.  This application runs in the background and handles key authentication.  When you run it, it will load into the Windows Notification Area (on the far right, used to be called the System Tray).

Pageant In Notification Area

Open up Pageant and then click the Add Key button.  Then navigate to where your private key is stored and load it.

Pageant Add Key

Pageant Add Key Prompt

If you put a passphrase on your key, which you should do, you will get prompted for it.

Pageant Password Prompt

Now your key will appear in Pageant and be used by applications that need to do key authentication.  You won’t have to enter your passphrase again while Pageant is running.  In practice this means you usually only have to re-enter your passphrase when you reboot your computer.

Pageant Key Added
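Conceptually, Pageant acts like an in-memory cache for your unlocked key: the passphrase is needed once to decrypt the key, and afterwards applications ask the agent to use the already-decrypted copy.  Here is a toy sketch of that idea (Python; the class, the names, and the fake “decryption” are all mine, real agents do actual cryptography):

```python
class ToyKeyAgent:
    """Illustrates the caching idea behind Pageant, not real crypto."""

    def __init__(self):
        self._unlocked = {}  # key name -> "decrypted" key material

    def add_key(self, name, encrypted_key, passphrase):
        # The passphrase is only needed here, once, to "decrypt" the key.
        if passphrase != "correct horse":
            raise ValueError("bad passphrase")
        self._unlocked[name] = encrypted_key.replace("locked:", "", 1)

    def sign_request(self, name):
        # Later authentication requests use the cached key,
        # no passphrase required.
        return "signed-with-" + self._unlocked[name]


agent = ToyKeyAgent()
agent.add_key("github", "locked:my-private-key", "correct horse")
print(agent.sign_request("github"))  # no passphrase prompt this time
```

Rebooting throws the cache away, which is why the passphrase prompt comes back after a restart.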


That is all there is to it.  Enjoy using your new key pair.


P.S. – I couldn’t find any good songs about keys, but keys are encryption and encryption is complicated math.  Tool is known for songs with unique time signatures (i.e. hard music math).  Schism is an excellent example of this with its 6 1/2 over 8 time signature.

I’ve done the math enough to know the dangers of our second guessing
Doomed to crumble unless we grow, and strengthen our communication

Posted in Software Development, Today I Learned | Tagged , , | Comments Off on Today I Learned How to Create a Key Pair Using PuTTY

Today I Learned how to Install PostgreSQL in Ubuntu

For an upcoming project I’m thinking of using PostgreSQL.  I’ve heard lots of great things about PostgreSQL in the past but have been too scared lazy busy to try it.

What changed my mind was JetBrains’ DataGrip database client.  I’m sure there are other PostgreSQL clients but DataGrip is included in my JetBrains subscription so why not give it a try.  I’m a sucker for GUI database clients.  As a generalist it’s too hard to remember all the command lines for each individual database.  Plus it’s really hard to view more than a few rows or columns of data in the command line.

Anyway, let’s get to installing PostgreSQL in an Ubuntu development environment.  The initial installation instructions can be found here.  First let’s add the PostgreSQL Apt Repository.  We do this so we can get the latest version of PostgreSQL and aren’t stuck with the version Ubuntu ships.

First create a file that will point to the PostgreSQL Apt Repository:

sudo nano /etc/apt/sources.list.d/pgdg.list

Then add the following to the file:

deb http://apt.postgresql.org/pub/repos/apt/ bionic-pgdg main

(Replace bionic with your Ubuntu release’s codename; lsb_release -cs will tell you what it is.)

Finally import the repository signing key:

wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -

Now do an apt update and you should see the PostgreSQL repository listed:

sudo apt update

Finally install it:

sudo apt install postgresql

You can check if PostgreSQL was installed correctly by trying to connect to it:

sudo -u postgres psql

Notice you had to run psql as the postgres user.  This is a new user that was created during the PostgreSQL installation and is the default superuser for new database installs.

By default the postgres user does not have a password on Ubuntu.  You can only log in via the above command and can’t connect using other methods, such as DataGrip.  To change the postgres user’s password run the following command when logged into PostgreSQL:

ALTER USER postgres PASSWORD 'password';

Once that is done you can log out of PostgreSQL by typing “\q”.
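If you are wondering why adding a password suddenly lets GUI clients in: on a stock Ubuntu install the pg_hba.conf file authenticates local socket connections by operating system user (peer authentication, which is why we switched to the postgres user above) but authenticates TCP connections, which is what DataGrip uses, by password.  The relevant lines look roughly like the below (the exact methods and file location vary by version, so treat this as a sketch):

# TYPE  DATABASE  USER      ADDRESS       METHOD
local   all       postgres                peer
host    all       all       127.0.0.1/32  md5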

Now let’s try to connect to our localhost PostgreSQL install in DataGrip.  Run DataGrip then choose File–>Data Sources.  Then click PostgreSQL and you should see something similar to the below.

DataGrip PostgreSQL Data Source

DataGrip PostgreSQL Data Source Advanced Options

If prompted, download the driver files.  Now let’s try to create a connection by clicking the green plus sign in the top right and choosing PostgreSQL:

DataGrip PostgreSQL Create Datasource Menu

You should see something similar to the below:

DataGrip PostgreSQL Create Datasource Download Missing Drivers

If there is a message saying a driver is missing then download it.  If this is your first time installing PostgreSQL there will be no database so leave that field blank but fill in the username and password.  The username is “postgres” and the password is the one you created in the above ALTER statement.  Click Test Connection to make sure everything works.

Test Database Connection

When you close this form you might be prompted to store the password in your keyring.  You don’t have to but I like to so I don’t have to keep entering it.

Key Chain Prompt

Now you should be able to see the PostgreSQL database in DataGrip:

PostgreSQL Database in DataGrip


P.S. – Spotify listed Bored to Death by Blink-182 as my most played song of 2017.  The second most played song was Sober, also by Blink-182.  I wonder what targeted ads I would get if that information fell into Google or Facebook’s hands.

Life is too short to last long

Posted in Code Examples, Software Development, Today I Learned | Tagged , , | Comments Off on Today I Learned how to Install PostgreSQL in Ubuntu

Blast from the Past: New Tools Require New Standards

I recently started a new contract that involves tools and software languages I normally don’t use.   I have to remember that .NET best practices don’t necessarily translate to PHP/Java.  I have to remember that New Tools Require New Standards (originally published on September 3rd, 2010).

“An old belief is like an old shoe.  We so value its comfort that we fail to notice the hole in it.”
Robert Brault

As developers we all have standards, even if they aren’t that well defined.  Of course I’m talking about technology standards but feel free to insert your social awareness and/or hygiene standard joke here.  Standards can range from the usual coding standards to the names you give your servers (e.g Lord of the Rings Characters) and everything in-between.

Having been a developer for one third of my life, I’ve developed quite a few standards of my own.  Having worked in Windows shops most of my career, most of my standards are focused around those tools.

Ruby on Rails

When I first tried Ruby on Rails, I was prepared for a new website architecture (e.g. MVC).  What I wasn’t prepared for was adopting the new coding standards that Ruby encouraged.  With my brain already overloaded with the new architecture, I found myself writing Ruby code as though it was C# code.  The biggest one I noticed was naming my database tables and fields in camel case format instead of the underscore format that Rails encouraged.

I know this sounds stupid now, but at the time, my poor overloaded brain wanted to keep using the camel case names even though the tool, Rails, didn’t encourage it.  I even went as far as to look up how to override the underscore names before I came to my senses.
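For what it’s worth, the renaming Rails wanted is mostly mechanical: CamelCase names become snake_case names.  A rough sketch of the conversion (Python; the regex and the examples are mine, and Rails’ real inflector also handles pluralization and irregular words):

```python
import re

def to_snake_case(name):
    # Insert an underscore before each interior capital letter,
    # then lowercase the whole thing.
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

print(to_snake_case("CustomerAddress"))  # customer_address
print(to_snake_case("OrderLineItem"))    # order_line_item
```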

Crushing your Head

It’s easy to forget but the main reason for having standards is to “compensate for the strictly limited size of our skulls”.  As Steve McConnell says:

“The primary benefit of a coding standard is that it reduces the complexity burden associated with revisiting formatting, documentation, and naming decisions with every line of code you write. When you standardize such decisions, you free up mental resources that can be focused on more challenging aspects of the programming problem.”

Often standards arise from the tools being used.  All tools come with their own standards from the creators and community at large.  In some cases, a standard is created to work around a limitation of the tools being used.  Just remember that when you switch to a new tool, such as when I tried Ruby on Rails, the old standards might not be applicable anymore.  Let me repeat that for emphasis:

When you switch tools, your existing standards will have to change.

This is a rule I am struggling to remember and I’ve only been a developer for one third of my life.  Now imagine you have been a developer for over half your life.  How hard is it to give up on your well worn standards when faced with a new tool?  Very hard, I think, based on this summarized experience I recently had:

  • Start developing application using Fluent NHibernate as it will be the company’s new standard.
  • Well into development, find out the company’s existing standards require all database access to go through stored procedures.  Brought to our attention by a tech lead who had been a DBA for over half his life.
  • Have several meetings and e-mails about the impact of re-writing the code to meet the standard and how the tool NHibernate doesn’t work well with their existing stored procedures standard.
  • The tech lead relents and allows us to use NHibernate as it was designed.

There is a much longer story but the important part is the tech lead realized that if you are adopting a new tool at your company, your existing standards will have to change.  Remember that the standards he had in place helped reduce his mental load.  Being a technical lead is enough work without having to learn a new set of standards.

My hope is that after I’ve been a developer half my life, I remember my own rule and am willing to adapt my entrenched standards to a new tool despite the pain it might cause my brain.

Posted in Blast from the Past, Software Development | Tagged , , , | Comments Off on Blast from the Past: New Tools Require New Standards

Introduction to Object-Relational Mapping for DBAs – Part 3

Why Developers Use ORMs (i.e. their Strengths)

This is the third and final part of a lightning talk I’m giving at the SQL Saturday Edmonton Speaker Idol Contest.  Imagine I’m actually speaking the words below and showing some of the images on slides and/or doing a demo.

If you don’t want to read Part 1 or Part 2, they basically give an example of a developer using an ORM, panic over indexes, and then show more ORM examples.  I don’t think it will all fit in a 15-minute talk.

This is a rough draft so constructive feedback is much appreciated.

The first obvious reason developers use an ORM is it lets them create an application knowing little to no SQL.  A developer without SQL training can use the ORM to help them generate the database and write queries for them.  The DDL and SQL queries created by the ORM are probably just as good as, or better than, what a novice SQL developer could write.

ORMs also write a lot of the repetitive SQL for the developer.  You know, the SQL to find a certain piece of data like a user or inventory item.  The SQL to update that piece of data or create it if it does not exist.  The boring repetitive CRUD SQL statements.  Without an ORM, developers used to have to write ADO.NET code which looked like:

using (var conn = new SqlConnection(connString))
{
  // Query to load all the information about a customer,
  // their home address, and their province.  Hopefully
  // there are no typos in the SQL.  Use parameters so we
  // don't have SQL injection issues.
  var cmd = new SqlCommand(@"
    Select c.*, a.*, p.Abbreviation
    From Customers c
    Inner Join Addresses a On a.CustomerId = c.Id
    Inner Join Provinces p On p.Id = a.ProvinceId
    Where c.Id = @CustomerId
    And a.AddressType = @HomeAddressType", conn);
  cmd.Parameters.AddWithValue("@CustomerId", customerId);
  cmd.Parameters.AddWithValue("@HomeAddressType", HOME_ADDRESS_TYPE);

  // The adapter to read the data from the database.
  var dataAdapter = new SqlDataAdapter(cmd);

  // The dataset to fill with data.
  // I think in .NET Core 2 you can fill a DataTable
  // instead of always using a DataSet.  If true that would
  // have been great 20 years ago.
  var dataSet = new DataSet();

  // Read the data and fill the dataset.
  dataAdapter.Fill(dataSet);

  // Show the data in the type unsafe dataset.  Hope
  // you don't have a typo in the column names and
  // you correctly handle database nulls (not shown).
  var drow = dataSet.Tables[0].Rows[0];
  Console.WriteLine($@"Name: {drow["Name"]}
    Street: {drow["StreetAddress"]}
    City: {drow["City"]}
    Province: {drow["Abbreviation"]}");
} // Connection to database is closed.

Actually, I’ll tell you DBAs a secret.  Back in the ADO.NET days no developer ever wrote all the code in the first example.  Instead we usually created our own custom code library that handled all the work to fill a DataSet with data.  The problem is every development shop had its own custom data access library with their own features and/or bugs.  But still, it was ugly code to write and would often break if the underlying database changed even a bit.

Using Entity Framework, Microsoft’s ORM, the above code looks like:

using (var ctx = new MyDbContext())
{
  // No need to worry about typos.  If Customers
  // is misspelled then a compile error will occur.
  // Also no need to worry about SQL injection.
  var customerToView = ctx.Customers
    .Include(customer => customer.Address)
    .ThenInclude(address => address.Province)
    .Single(customer => customer.Id == customerId);

  Console.WriteLine($@"Name: {customerToView.Name}
    Street: {customerToView.Address.StreetAddress}
    City: {customerToView.Address.City}
    Province: {customerToView.Address.Province.Abbreviation}");
} // Connection to database is closed.

Wow, that is a lot less code and much easier to read.  The developer still needs to understand a bit of the underlying database structure but a lot of the grunt work is taken care of.

Finally, the biggest strength of an ORM is abstraction of data access.  It lets developers focus on the business logic and GUI of the application.  Data access is this magic thing that just happens.

Actually that’s not 100% true.  Like all abstractions, ORMs leak, so the developer has to know something about the underlying database, but for those brief few moments the developer has one less thing to think about.  As Sherlock Holmes so eloquently said about developers:

“I consider that a man’s developer’s brain originally is like a little empty attic, and you have to stock it with such furniture as you choose. A fool takes in all the lumber of every sort that he comes across, so that the knowledge which might be useful to him gets crowded out, or at best is jumbled up with a lot of other things, so that he has a difficulty in laying his hands upon it. Now the skillful workman developer is very careful indeed as to what he takes into his brain-attic. He will have nothing but the tools which may help him in doing his work, but of these he has a large assortment, and all in the most perfect order. It is a mistake to think that this little room has elastic walls and can distend to any extent. Depend upon it, there comes a time when for any addition of knowledge, you forget something, that you knew before. It is of the highest importance, therefore, not to have useless facts elbowing out the useful ones.”

“But the Solar System Database!” [Watson] I protested.

“What the deuce is it to me?” he interrupted impatiently:  “you say that we go around the sun store data in a database.   If we went round the moon stored the data on a stone tablet it would not make a pennyworth of difference to me or to my work”.

Good developers are good at using abstractions.  They create abstractions in code through functions, classes, and layers such as the GUI layer, business layer, etc.  An abstraction lets them temporarily “forget” about the rest of the application and focus on the part they are working on.

Why DBAs Dislike ORMs (i.e. their Weaknesses)

Not only do good developers love abstractions but they also understand their weaknesses.  Bad developers don’t understand that abstractions can have weaknesses.  Remember Uncle Ben’s famous words:

With great power abstractions comes great responsibility.

The most common mistake beginner ORM developers make is to turn on lazy loading and then do a loop.  The famous n+1 select problem.

using (var ctx = new MyDbContext())
{
  // Load all the customers:
  // Select * From Customers;
  var customers = ctx.Customers.ToList();

  // Print each customer's city.  With lazy loading on,
  // each loop iteration sends another query:
  // Select * From Addresses Where CustomerId = ?
  foreach (var customer in customers)
    Console.WriteLine(customer.Address.City);
}
The above code will send one query to get all the customers and then send a separate query per customer to get the address information.  It would look like:

Select * From Customers;

Select * From Address Where CustomerId = 1
Select * From Address Where CustomerId = 2
Select * From Address Where CustomerId = 3
Select * From Address Where CustomerId = 4
Select * From Address Where CustomerId = 5

This works great in testing when there are only a couple of records but grinds to a halt in production.  Any DBAs seen something similar?
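To put rough numbers on it, here is a small simulation of the two loading strategies (Python; the fake database and the counters are mine, the only point is the query counts):

```python
QUERY_LOG = []

def run_query(sql):
    # Stand-in for a round trip to the database.
    QUERY_LOG.append(sql)

def load_cities_lazy(customer_ids):
    run_query("Select * From Customers")
    for cid in customer_ids:
        # Lazy loading: one extra query per customer.
        run_query(f"Select * From Addresses Where CustomerId = {cid}")

def load_cities_eager(customer_ids):
    # Eager loading: one joined query for everything.
    run_query("Select * From Customers c "
              "Inner Join Addresses a On c.Id = a.CustomerId")

load_cities_lazy(range(1, 1001))
lazy_count = len(QUERY_LOG)  # 1001 queries for 1000 customers

QUERY_LOG.clear()
load_cities_eager(range(1, 1001))
eager_count = len(QUERY_LOG)  # 1 query

print(lazy_count, eager_count)  # 1001 1
```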

Bad, or I guess naive is a better word, developers like lazy loading because the data just “magically” appears.  Good developers turn off lazy loading and write their code to only send one query.

using (var ctx = new MyDbContext())
{
  // Get all the customers and their addresses
  // in one query.
  // Select * From Customers c
  // Inner Join Addresses a On c.Id = a.CustomerId
  var customers = ctx.Customers
    .Include(customer => customer.Address)
    .ToList();

  // Print each customer's city.
  // No query sent, the data is already in memory.
  foreach (var customer in customers)
    Console.WriteLine(customer.Address.City);
}

Another common mistake when using ORMs is pulling back unnecessary data.  In the above example we only want the customer name and city but the generated query pulls back all the customer and address columns.  ORMs are great when you need all or most of the columns in a table but are not as useful when you only need one or two columns.

We can re-write the above to only bring back the columns we are interested in:

using (var ctx = new MyDbContext())
{
  // Get just the customer names and cities
  // in one query.
  // Select c.Name, a.City From Customers c
  // Inner Join Addresses a On c.Id = a.CustomerId
  var customerCities = ctx.Customers
    .Select(customer => new { customer.Name, customer.Address.City })
    .ToList();

  // Print each customer's city.  Notice the
  // data is flattened into an anonymous type.
  foreach (var custCity in customerCities)
    Console.WriteLine(custCity.City);
}

Another big weakness of ORMs is complex queries.  ORMs really don’t handle joining lots of tables together or complicated where clauses and/or groupings well.  These are best handled by writing SQL.  Needless to say, reports should never use an ORM.  Instead use SQL or, better yet, a tool that helps you create reports.

The final weakness of ORMs is they don’t work well with stored procedures and views.  This can really be a problem if you are a DBA that likes all data access to go through views or stored procedures.  ORMs can also be a problem if you use a lot of functions but that is usually less of an issue.

It’s not that ORMs can’t handle views or stored procedures, it’s just not their strength.  The ORM abstraction works best if tables can be mapped to classes and fields to properties.  I once saw a project that used an ORM but had to access all the data via stored procedures and it was not pretty.  It was like the data access code fell from the top of the ugly tree and hit every branch on the way down.

How DBAs and ORMs can Work Together

I recently heard a good quote from the Supergirl TV show which I think is appropriate:

GIF Team Work Makes the Dream Work

The quote probably didn’t originate from Supergirl but you get the idea.  Good teams have empathy for each other and hopefully this talk has increased your empathy for developers by showing you why they use ORMs.

Hopefully developers have empathy for you as well.  A good developer respects that the DBA is the guardian of the data.  It’s your job to make sure the data is accurate and readily available.  Developers often focus on how their application uses the data and forget that other applications might have different data access requirements.

Assuming you are working with a good developer keep the following in mind:

  1. Let the ORM generate schema changes but make sure to review them.  The developer might not realize that renaming the column caused the column to be dropped and recreated resulting in data loss.
  2. Let the ORM access the tables directly.  Only force access through a view or stored procedure if there is a good reason.
  3. Don’t be afraid to adopt the ORM’s naming scheme.  An ORM will often assume things like Id is the primary key column, table names are plural, foreign keys are <tablename>Id, etc.  If you can’t adopt the ORM’s preferred naming scheme then at least be consistent in your naming.
  4. Don’t worry about being made redundant.  ORMs can do some of the work DBAs used to do but not all of it and as we pointed out above ORM generated SQL should still be reviewed.  Also ORMs can’t do the A part of DBA (i.e. they don’t backup your database, secure it, create RAID arrays, etc).
  5. If you work with bad developers try to educate them.  They might just be naive.  If you can’t educate the developers and management won’t help you don’t be afraid to move on.  Life is too short to…<fill in the blank>.
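The conventions mentioned in point 3 above are simple enough to write down.  A deliberately naive sketch of what an ORM typically assumes (Python; real ORMs use proper inflectors instead of just adding an s):

```python
def table_name(class_name):
    # An entity class "Customer" is assumed to map to table "Customers".
    return class_name + "s"

def primary_key_column():
    # The primary key column is assumed to be "Id".
    return "Id"

def foreign_key_column(class_name):
    # A reference to Customer becomes a "CustomerId" column.
    return class_name + "Id"

print(table_name("Customer"), primary_key_column(), foreign_key_column("Customer"))
# Customers Id CustomerId
```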

Remember “Teamwork makes the dream work”.

Now that all three parts are written it looks longer than 15 minutes.  To find out what gets cut, changed, and added you will need to attend the SQL Saturday Edmonton Speaker Idol Contest.  Or just wait till I post the slides online.

As I said above, this is a rough draft so constructive feedback is much appreciated.  Thank you.

Posted in Code Examples, Software Development | Tagged , , | Comments Off on Introduction to Object-Relational Mapping for DBAs – Part 3

Introduction to Object-Relational Mapping for DBAs – Part 2

This is part two of a lightning talk I’m giving at the SQL Saturday Edmonton Speaker Idol Contest.  Imagine I’m actually speaking the words below and showing some of the images on slides and/or doing a demo.  Code can be found here.

If you don’t want to read Part 1, it basically started the ORM example and ended in a panic over indexes.

This is a rough draft so constructive feedback is much appreciated.

Now that we have our initial panic out of our system let’s get back to Bud.  The next thing he wants to do is link a logged in user to a player.  A player needs to have a name and be linked to the login.  Bud creates a model, really just a simple C# class, and puts in all the information he wants to store for a player.

Player Model

Looks kind of like a database table.  The one question you might have is what is an ApplicationUser?  It turns out an ApplicationUser is another model in the project that was auto-created when we chose to have authentication in our application.  It’s the logged in user.

If we open that ApplicationUser class we see it does not define any fields.  Well, it doesn’t define any fields in the child class but the parent class does.  I’m not going to explain inheritance here but rest assured the fields are defined as shown below.

Empty Application User Model

Application Model Parent With Fields

Because the relationship is one-to-one, Bud does have to add a new property to the ApplicationUser model.

Application User Model Linked To Player

Now the application knows that ApplicationUser has a one-to-one relationship to Player.  One other thing Bud has to do is add his new class to the DB context.  Entity Framework might find his new model on its own but it’s best if it’s listed.

Player Model Added to DB Context

Now Bud can create a new migration for his new Player model.

Add-Migration CreatePlayerTable


Add Player Table Migration

This creates the <timestamp>_CreatePlayerTable file.  If we open it up we see it creates the Player table and also a foreign key relationship to the ApplicationUser which maps to the AspNetUsers table.

Player Migration Create Table

Player Migration Foreign Key To Application User

Now that the migration file is created Bud runs the migration to add the Players table to his database.



Update Database Add Player Table

Now if we look in the database we find the new Players table, notice it’s plural, and it has a foreign key to the AspNetUsers table.

Players Table In SQL Server

OK, enough of following Bud, we are running out of lightning time.  Let’s talk about why Bud would want to use an ORM tool such as Entity Framework.

You can find part 1 here and part 3 here.  You can find the code for this talk here.  As I said above, this is a rough draft so constructive feedback is much appreciated.


Posted in Code Examples, Software Development | Tagged , , , , | Comments Off on Introduction to Object-Relational Mapping for DBAs – Part 2

Introduction to Object-Relational Mapping for DBAs – Part 1

This is part one of a lightning talk I’m giving at the SQL Saturday Edmonton Speaker Idol Contest.  Imagine I’m actually speaking the words below and showing some of the images on slides and/or doing a demo.  Code can be found here.

This is a rough draft so constructive feedback is much appreciated.

Skip ahead to Part 2 if you don’t feel like reading all of part 1.

Developer Bud wants to create a new application to track the board games he and his friends play.  He wants a simple website where he and his buddies can log in to update the results from the board games they have played.  Being a .NET Developer he creates an ASP.NET MVC Core application with individual authentication.

Creating New ASP .NET MVC

Creating New ASP .NET MVC Authentication

By default this new application uses Entity Framework, which is an object-relational mapping (ORM) framework.  Since he chose to use authentication, the default ASP.NET application has a migration file that defines the authentication tables.

Authentication Migration File Location

Authentication Migration File

Even if you don’t understand C# you can probably see the above is describing a database but is not the usual SQL DDL.  We will return to this file later, for now let’s continue to follow Bud.

Next he compiles and runs the application to make sure it works.  It loads up and looks like the below.

App Running For First Time

Everything looks good but when he tries to create a new user he gets the following error:

Trying To Register

Apply Migrations Registration Error


Following the advice, he runs the migration.  Actually, before he runs the migration he changes the connection string to point to his SQL Server instance instead of the SQL Server local DB:

App Settings JSON Location

Connection String In App Settings


Then Bud runs the command to update the database:



Update Database Command

Update (Dec 5th, 2018): Added the Bud successfully runs the application section below and changed some text at the end or part 1.

Bud runs the application again and this time when he registers there are no errors and he is successfully registered.

Register Successful After Migration


Bud doesn’t care about the underlying database that was created.  Well, that is not true, he does care about it but the same way most of us care about our car engine.  We only care about our car engine if the car won’t start.  If the car gets us from point A to point B then we don’t really care about the engine.

Bud doesn’t do this but because we are omnipotent DBAs (are all DBAs omnipotent?) we will peek behind the curtains at the generated database.  And here it is, a new database with some authentication tables.

Buddies Game Tracker Tables Created

From a developer point of view this is great.  Bud didn’t have to write a single line of SQL.  He didn’t even have to open up SQL Server, the database was just magically created.

This is just the beginning.  Later Bud will create more tables in code to track which buddies played which games.  He will do this without writing DDL and will access the data with little to no SQL.

I imagine from a DBA’s point of view this is a bit strange.  Don’t you start a new application by creating the database ERD first?  How did Visual Studio create the database?  Is the created database any good?  What about the auto-generated CRUD SQL?  Wait, what about the indexes?

Won't Somebody Please Think of the Indexes

Continue to part two of the talk.  You can find the code for this talk here.  As I said above, this is a rough draft so constructive feedback is much appreciated.



Posted in Code Examples, Software Development | Tagged , , , , , | Comments Off on Introduction to Object-Relational Mapping for DBAs – Part 1

XPlugins.iOS.BEMCheckBox 1.4.3 Released

I’m happy to announce the release of XPlugins.iOS.BEMCheckBox 1.4.3.  The main feature of this release is exposing the DidTapCheckBox event in the underlying BEMCheckBox.  You can find a full list of issues fixed in this release here.

The easiest way to get this update is via NuGet:

Install-Package SaturdayMP.XPlugins.iOS.BEMCheckBox -Version 1.4.3 
dotnet add package SaturdayMP.XPlugins.iOS.BEMCheckBox --version 1.4.3 

Subscribe to the DidTapCheckBox event as you would any other C# event:

checkbox.DidTapCheckBox += DidTapCheckBoxEvent;

The event handler looks like:

// Fired before the checkbox animation completes but after the internal
// checkbox settings are updated with the new check/unchecked status (i.e.
// On property is updated).
private void DidTapCheckBoxEvent(object sender, EventArgs eventArgs)
{
  Console.WriteLine("In DidTapCheckBoxEvent which maps to DidTapCheckBox in BEMCheckBox.");
}

Remember XPlugins.iOS.BEMCheckBox is just a Xamarin wrapper for the Objective-C BEMCheckBox.  For all the features of the checkbox check out the BEMCheckBox website.

Thanks, as always, to Boris-Em for creating the excellent BEMCheckBox.

Posted in Software Development | Tagged , , , , | Comments Off on XPlugins.iOS.BEMCheckBox 1.4.3 Released

Happy Holidays and all the Best in 2018!

Wrapped Christmas Presents

Gift wrapping skill is inversely correlated to software development skills.

This morning I made my last shopping trip before Christmas, at least I hope it’s my last trip.

Pro tip: the Costco near us officially opens at 10am but if you show up at 9:45ish they will let you in.  At least they let me in.  Then you can run, grab the salmon, and get out before the hordes descend.

Now the salmon is becoming gravlax and the last of the gifts are wrapped.  All that’s left is to write this blog post.  Actually, there are a couple of other Christmas chores to finish up, but not many, and then I can kick back and enjoy the holidays with friends and family.

Please note that kicking back means Saturday Morning Productions will be slow to respond to phone calls, e-mails, etc. over the holidays.  We appreciate your understanding.

We hope that you have also wrapped up your Christmas chores and gifts and can enjoy this holiday long weekend.  We also hope you are looking forward to the grand adventures you will have in 2018.

Happy holidays and all the best in 2018!

P.S. – Don’t forget this is the season of sharing.  If you are in a position to do so share some of your good fortune with others.  Like cooking the turkey for your wife while they are at the food bank.


Posted in Fun | Comments Off on Happy Holidays and all the Best in 2018!

Today I Learned How to Fix Illegal Characters in Path Error with TeamCity and RealmDB

When building a Xamarin application one step is to build the Android APK file.  This is an MSBuild step in TeamCity that looks like:

TeamCity Create APK Package Build Step

This step generates the following error:

[15:55:53][Source\SmallVictories\SmallVictories.csproj] CopyRealmWeaver
[15:55:53][CopyRealmWeaver] CopyRealmWeaver
[15:55:53][CopyRealmWeaver] Copy
[15:55:53][Copy] Creating directory "*Undefined*Tools".
[15:55:53][Copy] C:\BuildAgent\temp\buildTmp\.nuget\packages\realm.database\2.1.0\build\Realm.Database.targets(28, 5): error MSB3021: Unable to copy file "C:\BuildAgent\temp\buildTmp\.nuget\packages\realm.database\2.1.0\build\..\tools\RealmWeaver.Fody.dll" to "*Undefined*Tools\RealmWeaver.Fody.dll". Illegal characters in path.
[15:55:53][Step 6/8] Error message is logged

Notice the “*Undefined*Tools” in the directory path.  To fix this step you need to add /p:SolutionDir="/" to the command line.  So now the build step looks like:

TeamCity Create APK Package Build Step Fixed

You can find more about the bug in this GitHub issue and more about the fix in this blog post.
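If you run the same build outside TeamCity, the fix is the same property on the MSBuild command line.  A rough sketch (the project path comes from the log above; the target and configuration are illustrative, so match them to whatever your APK step already builds):

```shell
# MSBuild invocation with SolutionDir set explicitly so the Realm
# targets file no longer resolves it to "*Undefined*".
# Target and configuration are placeholders -- use your own.
msbuild Source\SmallVictories\SmallVictories.csproj /t:SignAndroidPackage /p:Configuration=Release /p:SolutionDir="/"
```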


Posted in Software Development, Today I Learned | Comments Off on Today I Learned How to Fix Illegal Characters in Path Error with TeamCity and RealmDB

Overlapping Segments Create Space Time Paradoxes

This post is part of a larger discussion about temporal databases.  Hopefully it stands on its own but for more context see the Temporal Database Design page.  You can read the official Wikipedia definition but for our purposes a Temporal Database is a database where you can query for historical data using SQL.  This is a work in progress and constructive feedback via e-mail is most appreciated.

In the last Temporal Database post I introduced timelines.  In this post we will talk about the one rule you cannot break:

Timeline segments cannot overlap!

That’s it.  That is the one rule.  Breaking this rule leads to all sorts of problems when you try to actually implement timelines in a database.  Aside from creating problems at the database level, it also creates theoretical headaches.  For example, if you have the following in your customer table:

Timelines - Overlapping Segments

What is Chronos’s name on December 20th?  Is it Chronos or Kronos?  That said, it is possible for people to have two names at the same time.  We have nicknames, aliases, and the like.  If you need to create a database that supports multiple names, say a police database, you can have an Aliases table that hangs off the Offenders table.  For example:

Timelines - Aliases Table

Chronos has an offender record with his name but also has two timelines in the Aliases table with his other names.  Notice the Aliases table has two separate timelines, not one timeline with overlapping segments.
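Whatever the schema, the no-overlap rule itself is easy to state in code.  A minimal sketch (in Python for brevity, and the segment dates here are made up for illustration, not taken from the example above):

```python
from datetime import date

def overlaps(a_start, a_end, b_start, b_end):
    """Two closed date segments overlap when each starts on or before the other ends."""
    return a_start <= b_end and b_start <= a_end

# Two name segments that both cover December 20th break the rule...
chronos = (date(2017, 12, 1), date(2017, 12, 25))
kronos = (date(2017, 12, 15), date(2017, 12, 31))
print(overlaps(*chronos, *kronos))  # True: the segments overlap

# ...while back-to-back segments are fine.
print(overlaps(date(2017, 12, 1), date(2017, 12, 14),
               date(2017, 12, 15), date(2017, 12, 31)))  # False
```

A check like this, run as a constraint or trigger before inserting a new segment, is what keeps the paradoxes out of the database.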

How you design your temporal database will depend on your business logic but you must remember that there is one rule you can’t break:

When this baby hits 88mph you are going to see some serious $#%!

Sorry, wrong timeline.  I meant:

Timeline segments cannot overlap!

Posted in Software Development | Comments Off on Overlapping Segments Create Space Time Paradoxes