Dovetail blog posts by Tihomir Kit

Invalidating AWS Cloudfront cache using Octopus Deploy

AWS + Octopus Deploy

I recently wrote about how we submitted our first Octopus Deploy step template to their online library, for deploying .Net web apps to AWS Elastic Beanstalk.

This time we needed to automate AWS Cloudfront cache invalidation. It turns out there are a few different ways to achieve this: you can do it from the AWS console, by making a REST request, or by using the AWS CLI tool.

Since authenticating against the AWS REST API is more complex than we feel is necessary for an Octopus Deploy step, we decided to go with the AWS CLI approach, which is much easier to authenticate with.

So, one more GitHub pull request and one more Octopus Deploy step template in their library, in the hope it finds someone in need. :)

The PowerShell script that does the hard work behind the template is the following (just fill in the AWS configuration variables):

# AWS credentials profile name (should be unique)
# Used to store your AWS credentials to: ~/.aws/
$CredentialsProfileName = ""

# AWS Cloudfront Region
$Region = ""

# AWS Cloudfront Distribution Id
$DistributionId = ""

# AWS Access Key
$AccessKey = ""

# AWS Secret Key
$SecretKey = ""

# Space-delimited list of paths to invalidate.
# For example: /index.html /images/*
$InvalidationPaths = ""


Write-Host "Setting up AWS profile environment"
aws configure set aws_access_key_id $AccessKey --profile $CredentialsProfileName
aws configure set aws_secret_access_key $SecretKey --profile $CredentialsProfileName
aws configure set default.region $Region --profile $CredentialsProfileName
aws configure set preview.cloudfront true --profile $CredentialsProfileName

Write-Host "Initiating AWS cloudfront invalidation of the following paths:"
Write-Host $InvalidationPaths
aws cloudfront create-invalidation --profile $CredentialsProfileName --distribution-id $DistributionId --paths $InvalidationPaths

Write-Host "Please note that it may take up to 15-20 minutes for AWS to complete the cloudfront cache invalidation"

The script uses a named profile for the AWS credentials. If you don't want to use profiles, you can remove those bits from the script, but then you might have to set the credentials up again every time you deploy a different project.
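
If you do drop the profiles, a stripped-down sketch of the script might look something like this (it writes the credentials to the default AWS CLI profile instead):

# Rough sketch without a named profile - credentials go into the default AWS CLI profile
aws configure set aws_access_key_id $AccessKey
aws configure set aws_secret_access_key $SecretKey
aws configure set default.region $Region
aws configure set preview.cloudfront true

aws cloudfront create-invalidation --distribution-id $DistributionId --paths $InvalidationPaths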

Cheers.

 


Deploying .Net web apps to AWS Elastic Beanstalk using Octopus Deploy

AWS + Octopus Deploy

Happy New Year! Here is a small 2017 present from Dovetail to everyone.

We normally use Azure to host the apps we make. The whole single-click build-and-deploy process using TeamCity and Octopus Deploy is in place, and it's trivial for us to add new projects to the pipeline.

Recently, however, one of our clients wanted to host the .Net web app we're building for them on Amazon Web Services (AWS), because that's where the rest of their infrastructure is. Not many people host their .Net apps on AWS, since MS Azure feels like a more natural fit, which meant it was a bit harder to find a fast and easy way to automate the deployment process to AWS through Octopus Deploy.

Anyway, we found a kind-of-half-baked solution (thanks!) on GitHub, made a few modifications, wrapped it up in a nice Octopus step template and made a pull request to the Octopus library. :)

The template got accepted and can now be obtained from their library.

We hope it helps someone else and saves them some time setting the whole thing up.

Also, here are some more resources about deploying .Net apps to AWS which we found interesting.

codeproject.com - AWS deployment with octopus deploy
AWS docs - awsdeploy.exe tool
Octopus discussions - AWS elastic beanstalk
Octopus discussions - Modifying machines in environments to support AWS autoscaling
Octopus discussions - AWS beanstalk deployment using octopus deploy


Using the SQL geometry type for finding shapes near or intersecting a lat-lng point

This short blog post provides two SQL stored procedures that use the SQL geometry data type to work out how a lat/lng point relates to spatial shapes at the database level.

What this means is: you provide a lat/lng pair and the DB returns either all the shapes that the point intersects or, in the other case, the shapes nearest to that point.

You could also use the geography type for this, and the code should end up only slightly more complex, but we didn't have a need for it so we went with the geometry type.
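
For reference, the main difference with the geography type would be how the point is constructed (and the fact that STDistance would then return metres); a rough sketch, assuming SpatialData were a geography column:

-- Note: geography::Point takes latitude first, unlike geometry::Point(x/lng, y/lat, srid)
DECLARE @geoPoint GEOGRAPHY
SET @geoPoint = GEOGRAPHY::Point(@lat, @lng, 4326)
-- STIntersects / STDistance are then used the same way as in the procedures below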

We used these to identify company branches for a certain location on the map. The geometry data is stored in the SpatialData field (of the geometry type), and we return the BranchId and BranchName, but you should obviously adjust that to your needs.

-- Finds intersecting branches
-- Accepts lat and lng
CREATE PROC SP_GET_INTERSECTING_BRANCHES
  @lat FLOAT,
  @lng FLOAT
AS
BEGIN
  DECLARE @point GEOMETRY
  SET @point = GEOMETRY::Point(@lng, @lat, 4326)
  SELECT BranchId, BranchName FROM [dbo].[Branch]
  WHERE @point.STIntersects(SpatialData) = 1
END
GO


-- Finds the nearest branches
-- Accepts lat, lng and the amount of matching rows to return
CREATE PROC SP_GET_CLOSEST_BRANCHES
  @lat FLOAT,
  @lng FLOAT,
  @amount INTEGER
AS
BEGIN
  DECLARE @point GEOMETRY
  SET @point = GEOMETRY::Point(@lng, @lat, 4326)
  SELECT TOP (@amount) BranchId, BranchName, @point.STDistance(SpatialData) AS Distance FROM [dbo].[Branch]
  ORDER BY @point.STDistance(SpatialData)
END
GO
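
Calling them is then as simple as this (the coordinates here are made up, purely for illustration):

-- Example calls with made-up coordinates
EXEC SP_GET_INTERSECTING_BRANCHES @lat = 53.3498, @lng = -6.2603
EXEC SP_GET_CLOSEST_BRANCHES @lat = 53.3498, @lng = -6.2603, @amount = 5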

Hope it helps. Cheers!


Custom JavaScript parser vs Jison - Our experience

 

We recently announced QuickDBD, a simple product we made for drawing database diagrams by typing. If you take a look at the QuickDBD app you'll see it converts source code into a diagram. What we needed to make this work was obviously a parser.

After a bit of research on how to approach this problem, we knew we would have to either use an existing parser generator or build a custom parser ourselves. After narrowing the choices down a bit, PEG.js and Jison emerged as the two most popular JavaScript parser generators at the moment. Of the two, Jison seemed to have the slightly bigger community - a few more GitHub followers, more StackOverflow questions and slightly better documentation. It seemed like the better bet, so we decided to spend a bit of time playing with it and try to make it parse the QuickDBD syntax.

We managed to make it parse the first version of our syntax from a few months back fairly quickly. But since the language we came up with for QuickDBD is closer to a data description language than to what most people would consider a programming language, we started hitting bumps in the road pretty quickly as well. We soon had to handle edge cases we couldn't cover with Jison alone, which meant overriding Jison's behaviour and injecting custom bits of JavaScript into it.

That felt pretty messy, so we talked it over and decided to go with our own custom JavaScript parser, for several reasons:

  • we would have complete control over how the parser works
  • everyone here is very well versed in JS
  • Jison was new to everyone and there is a bit of a learning curve before you can work with it efficiently
  • it felt as if we were fighting Jison to make it do something it wasn't designed for, rather than it being a great tool that would empower us to do things better and faster
  • a couple of times it was pretty hard to find information on how to do something with Jison, so we had to fall back to reading its source code to figure things out
  • it didn't feel like the right tool for the job

We did, however, pick up some ideas from trying it out, and I believe they made the custom parser we ended up with that much better. We wrote a parser that's fairly small, fast and easy to read, expand and fix - which is ultimately what we needed.

I still think Jison is a great tool, but it just wasn't a very good fit for our needs. If you're considering using it, perhaps try it out on a smaller subset of your language's features first and see how you like it before committing to it. You can always go back to writing something custom after you've tried it out.

I also recommend reading this very good StackOverflow thread on parser generators vs custom parsers, with pros and cons for both sides.

Hope this helped!


Hello QuickDBD!

Quick Database Diagrams

For the last couple of months we've been working on a side-project here at Dovetail. Martin and Trevor wanted a tool to quickly draw/prototype database diagrams by typing, so we're happy to announce QuickDBD! We decided to wrap it in a shiny design and make it a little product which we hope others will find useful as well. In time, if there is enough demand, we'll expand the feature set. If you have any ideas or suggestions, please let us know on our roadmap Trello board.

A lot of cool and interesting technologies were used in the making of QuickDBD, and no programming languages were harmed! We used AngularJS, TypeScript, JointJS (for diagram rendering - awesome library!), Karma and Jasmine (for testing), Angular Material and SASS on the front-end, .Net WebAPI, xUnit and MS SQL on the back-end, and we automated our build-test-deploy pipeline with bower, gulp, TeamCity, Octopus Deploy and Azure. A very interesting journey!

We hope you like QuickDBD as much as we do. If you have any feedback, please let us know!


Integrating Karma code coverage with TeamCity

To unit test our Angular apps we use the Karma test runner and the Jasmine testing framework. Locally, we run these tests using a gulp script that takes care of the whole app-building process. To ensure nothing is broken before publishing the app to production, we also run the tests during the continuous integration process using TeamCity.

This post expects you to already have a gulp testing process in place and it won't cover that part. It also expects you to have a working TeamCity setup. It will only help you integrate Karma with TeamCity as an additional build step, so that you end up with something like this in your TeamCity.

Number of passed/failed tests:

The code coverage tab:

There are a few requirements before we can make this work. To help you better understand our setup, here is a sample project structure that we have:

The first thing to do is ensure you have the following npm packages installed and that they are saved in your package.json file:

"karma": "^0.13.22",
"karma-chrome-launcher": "^1.0.1",
"karma-coverage": "^1.1.1",
"karma-jasmine": "^1.0.2",
"karma-phantomjs-launcher": "^1.0.0",
"karma-teamcity-reporter": "^1.0.0",

Next ensure that you have the following set up in your karma.conf.js:

  • "coverage" and "teamcity" in the reporters list
  • "PhantomJS" in your browsers list
  • singleRun set to true
  • our coverageReporter configuration looks like this (this part is pretty important):
coverageReporter: {
  dir: 'coverage',
  reporters: [
    { type: 'html', subdir: 'html' }
  ]
}
  • set the preprocessors configuration to something like this:
'path/to/code/you/want/to/test/*': ["coverage"]
  • NOTE: we do not have the plugins property set up
  • the rest of the options are pretty much standard - add/remove what you need (a minimal karma.conf.js putting the above together is sketched below)
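
To put it all together, a minimal karma.conf.js based on the settings above might look something like this (the file paths are placeholders - adjust them to your project structure):

// A minimal karma.conf.js sketch based on the settings above.
// The paths below are placeholders - adjust them to your project.
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    files: [
      'path/to/code/you/want/to/test/**/*.js',
      'path/to/tests/**/*.spec.js'
    ],
    preprocessors: {
      'path/to/code/you/want/to/test/*': ['coverage']
    },
    reporters: ['coverage', 'teamcity'],
    coverageReporter: {
      dir: 'coverage',
      reporters: [
        { type: 'html', subdir: 'html' }
      ]
    },
    browsers: ['PhantomJS'],
    singleRun: true
  });
};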

Now that this is all set up, go to your TeamCity. This is essentially what our client-side build process looks like:

The step of main interest for this post is the "Run Karma Tests" step. Here is how we have it set up (create a Command Line step):

This is a slightly modified version of what the Karma documentation recommends. The difference is that we force the use of the local Karma module and specify the configuration as a command-line parameter, like this:

node node_modules/karma/bin/karma start karma.conf.js

The last piece of the puzzle is setting up the coverage artifact. Go to the General Configuration Settings of your project in TeamCity and add an additional coverage artifact path (the second line):

The important bit (it's simply where our coverage html files are located):

Project.WebApp/coverage/html/** => coverage.zip

Go back and see how we have the coverage/html folder in our project structure. It is set up by the coverageReporter property in karma.conf.js. This artifact path takes all the files from the coverage/html folder and compresses them into a coverage.zip archive. After the build process finishes, TeamCity will (if it is able to find the coverage.zip archive inside the artifacts folder) automatically import it as code coverage for the project, and you will be able to navigate to the "Code Coverage" tab for that specific build. If any tests don't pass, this step will also fail, stop the build and prevent a broken build from ending up in production.

Hope this helps. Cheers! :)


Migrating from InfluxDb v0.8.7 to v0.9.6


In one of the applications we're working on, we recently had to make the move from InfluxDb v0.8.7 to v0.9.6. Because the official migration paths didn't work for us (DB upgrades would either lose data or not finish at all), we had to develop a small C#/.Net app that would reliably execute the migration for us.

We successfully migrated around 4GB of data with it and are quite happy with how it went. It did take quite a bit of time but all the data is safe and usable.

The app also lets you specify backfills (rollups) to be created once all the base data is migrated. 

Today we're open-sourcing this migration tool in the hope it might help someone else make the move as well. :)


Open-sourcing InfluxData.Net library

We've been using the InfluxDb time-series database for almost a year now on one of our projects and it works pretty nicely, even though it still hasn't hit the v1.0 mark.

We started our InfluxDb journey with v0.8.7 and, even though we wanted to, thus far there was no easy way to migrate to v0.9.x. We did, however, come to a point where we needed to upgrade in order to implement new features required by the project.

The first step was to see if there were any .Net libraries that supported InfluxDb v0.9, and the one we'd been using from the start seemed to be the best one. The problem was that it hadn't been updated for quite a while and didn't support the latest InfluxDb versions.

So, I decided to fork it, refactor it and make it work with the latest InfluxDb. The code can be found on GitHub, it's under the MIT licence, and there is also a NuGet package on nuget.org. The integration tests are all working again and the docs have been updated. Rejoice!
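
If you want to give it a go, installing it from the NuGet Package Manager Console should be as simple as this (assuming the package ID matches the library name):

Install-Package InfluxData.Net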

In the future, my plan is for the library to support the rest of the TICK stack layers as well, once their APIs get more stable.

We're also planning on open-sourcing the migration tool that we developed and used to migrate the data from v0.8 to v0.9, in the hope it might help someone else as well. :)


Failing Azure Recovery Service VM restore jobs and how to resurrect your backups

Recently we had a bit of a crisis when one of our Azure VMs decided to lose a bunch of data. Fortunately, backup jobs were set up through Azure's Recovery Services, and I had already used those a few times to restore or make copies of various VMs without any problems. A few clicks and you're ready to go. This was supposed to be an easy 20-minute task, but this time was different.

For whatever reason, instead of getting a restored VM, I started getting the following restore job failure message:

Restore failed with an internal error.
Please retry the operation in a few minutes. If the problem persists, contact Microsoft Support.

Not very descriptive, and not really helpful. :/

The data transfer part of the job would succeed each time, but the "Create the Restored VM" step kept failing. I tried using different restore points from a day, a week, or even a month back, but it made no difference. It got to the point where we had to submit a ticket to Microsoft to resolve the issue.

The two possible solutions that were presented to us were:

  • either restore the VM under a new Azure Cloud Service - this worked fine, but wasn't really what we wanted to do (you don't really want to pile up additional Cloud Services just to do a simple restore, it makes no sense and leaves a messy infrastructure behind)
  • restore the VM through Azure Powershell - this was a bit trickier, but it worked great in the end

So, after a bit of research, I realized that the Azure Web Portal doesn't actually use the exact same back-end infrastructure as PowerShell, which is a bit weird and should probably be emphasized a bit more throughout the Azure documentation.

Microsoft support told us to follow this documentation page to restore the VM using PowerShell, but the tutorial wasn't without its kinks either.

Perhaps this has been resolved by now, but for the whole thing to work I first needed Azure PowerShell v1. That ended up being a bit of a pain because it required regular PowerShell v3, while Windows 8.1 comes with PowerShell v4, and the downgrade was another mission impossible... In the end I somehow managed to resolve the issue by installing the latest Azure PowerShell using the Microsoft Web Platform Installer. That gave me the much-needed Azure tooling for PowerShell. Yay!

Now to the code - these few PowerShell commands will extract the VHD from the backup:

> Select-AzureRmSubscription -SubscriptionName YourSubscription
> $backupvault = Get-AzureRmBackupVault -Name "YourBackupVault"
> $backupitem = Get-AzureRMBackupContainer -Vault $backupvault -Type AzureVM -name "YourVmName" | Get-AzureRMBackupItem
> $rp = Get-AzureRMBackupRecoveryPoint -Item $backupitem
# change the $rp number to select the recovery point you want here
> $restorejob = Restore-AzureRMBackupItem -StorageAccountName "yourStorageAccountName" -RecoveryPoint $rp[6]
> $restorejob = Get-AzureRMBackupJob -Job $restorejob
> $details = Get-AzureRMBackupJobDetails -Job $restorejob

From here I finished the process using the Azure Portal, as the rest of the steps / PowerShell commands from the documentation seemed to be out of date and didn't work.

To complete the process, go to the Azure Portal, to the VM section, and select the "Disks" tab. From there you'll be able to create an unprovisioned disk which you can then use to create a new VM. Afterwards, click the + icon in the bottom-left corner and choose "create a VM - from gallery"; you will see an option to use your newly created disk. Finish the setup and you're good to go. :)

Hope this helps you if you find yourself in a similar situation. Cheers!


Should you use Angular2 in production?

Recently we had a small debate about Angular2 and what the benefits and pitfalls of using it for a project right now would be. In the end Fabrizio and I came up with a short list of pros and cons.

Pros

  • Typescript will force developers to write better code.
  • Angular2 should be faster than Angular1.
  • It is best not to invest in a framework (Angular1) that is going to be discontinued before long.
  • You will be one of the Angular2 pioneers.
  • The development process will be very strict and it will require a good knowledge of the project.
  • Localization of the application will be easier with the implementation of the shadow DOM.
  • Debugging templates will be easier because they will raise runtime exceptions.
  • The code needs to be built before deployment. This will slow down the process but will spot code errors and typos.
  • Gaps between browsers' implementations of new standards will be handled by specific libraries (Angular2 will emulate the shadow DOM).

Cons

  • It is in an alpha version, which means the inner structure could be (and will be) subject to big breaking changes.
  • The API is not stable yet (breaking changes will be introduced).
  • Not all features are implemented yet (you will have to reinvent the missing parts, and then, once they get officially implemented, your custom workarounds will become obsolete and probably not as optimized or as good as the Angular2 ones).
  • Not enough documentation. Also, not enough code examples on the web, so much of the work will be pioneering.
  • The ecosystem is not there yet (not all libraries and tools have been ported). For example: there is an alpha of Bootstrap; Foundation isn't there yet; the router is not ready yet. The lack of convenient libraries may mean more development work.
  • Both versions will remain on the market and both of them will be actively developed.
  • The team is still thinking about "how to do things for Angular2".

So that's what we came up with. Of course there is no ultimate answer and surely Angular2 will be a good tool once it's ready. But before that happens, we think it's probably best not to use it for serious projects that need to go into production.

Speaking of framework readiness, here is an appropriate comic from CommitStrip that hits the spot.


