Realex Payments recently announced that it is ending support for TLS versions 1.0 and 1.1, and has begun emailing customers to let them know of this change.
I have written this guide to help people apply security best practices on a Windows server running IIS, which should address the new Realex security requirements.
Please note that in order for the changes to take effect you will need to restart your server.
This guide is only for servers running Windows and IIS.
Your web applications (websites) will also need to have an SSL certificate.
Step 1: Download IIS Crypto 2.0
Go to Nartac and download IISCrypto.exe to your server.
Step 2: Run IIS Crypto 2.0
Run the executable you just downloaded. It is a portable program so it doesn't install anything. The program should display a screen similar to the one shown here.
Step 3: Click the Best Practices Button
On the screen, click the "Best Practices" button on the bottom left, or select the options you want. The window should then look like the screen below. Once you are happy with the selected tick boxes, click the "Apply" button.
Step 4: Restart your Server
After you click "Apply" you will need to reboot your server. IIS Crypto will tell you to do this (it will not reboot the server for you).
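Under the hood, IIS Crypto applies these choices by editing the SCHANNEL registry keys. If you ever need to make the change by hand, a minimal PowerShell sketch for disabling TLS 1.0 looks like the following (run as Administrator; repeat with "TLS 1.1" to disable that protocol, and reboot afterwards):

```powershell
# Disable TLS 1.0 for both the server and client roles via the SCHANNEL registry keys
$base = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0"

foreach ($role in "Server", "Client") {
    $key = Join-Path $base $role
    New-Item -Path $key -Force | Out-Null
    New-ItemProperty -Path $key -Name "Enabled" -Value 0 -PropertyType DWord -Force | Out-Null
    New-ItemProperty -Path $key -Name "DisabledByDefault" -Value 1 -PropertyType DWord -Force | Out-Null
}
```

Using IIS Crypto is still the safer option, as it applies a coherent set of protocol and cipher suite settings in one go.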
Step 5: Check your server at Qualys SSL Labs
Once your server and IIS have come back online, you will need to check your server's rating. Enter the URL of the site or the IP address of the server and have Qualys SSL Labs test your server. You will want at least an A rating for your server. If you do not get an A rating, review your server's security settings and re-run the SSL report.
I hope this helps anyone who may want to update their server's security.
I recently learned of a new tool which obtains and installs an SSL certificate for you automatically and renews it every 3 months - Let's Encrypt.
I was eager to try out this new service on one of our Umbraco sites; however, there was an issue when I tried to run the program.
Let's Encrypt adds a folder called ".well-known" to the root of the site. It then uses this folder to verify the site and issue an SSL certificate. When you attempt to do this using an Umbraco site you will be given an error which says something along the lines of "Let's Encrypt cannot access this folder".
In order to get the SSL certificate issued and installed, you will need to modify the web.config of your Umbraco site. Change this line:
<add key="umbracoReservedPaths" value="~/umbraco,~/install/" />
to:
<add key="umbracoReservedPaths" value="~/umbraco,~/install/,~/.well-known/" />
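For context, the umbracoReservedPaths key lives in the <appSettings> section of the web.config; a sketch of where the modified line sits (other keys omitted):

```xml
<configuration>
  <appSettings>
    <!-- Listing ~/.well-known/ here stops Umbraco routing requests for that
         folder through its own pipeline, so the verification files can be served -->
    <add key="umbracoReservedPaths" value="~/umbraco,~/install/,~/.well-known/" />
  </appSettings>
</configuration>
```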
Re-run the Let's Encrypt program and the SSL certificate should then be issued and installed for your Umbraco site.
Note: this will also work for Azure-hosted Umbraco sites using the Kudu Let's Encrypt site extension.
I recently wrote about how we submitted our first Octopus Deploy step template to their online library, for deploying .Net web apps to AWS Elastic Beanstalk using Octopus Deploy.
This time we needed to automate AWS CloudFront cache invalidation. It turns out there are a few different ways to achieve this: from the AWS console, by making a REST request, or by using the AWS CLI tool.
Since authenticating against the AWS REST API is a bit more complex than we feel is necessary for the purpose of using it within an Octopus Deploy step, we decided to go with the AWS CLI approach (it's much easier to authenticate).
One more GitHub pull request and one more Octopus Deploy step template in their library, in the hope it might help someone in need. :)
The PowerShell script that does the hard work in the background of the template is the following (just fill in the AWS configuration variables):
$CredentialsProfileName = ""    # name for the local AWS CLI profile the script sets up
$Region = ""                    # e.g. eu-west-1
$DistributionId = ""            # the CloudFront distribution ID
$AccessKey = ""                 # AWS access key ID
$SecretKey = ""                 # AWS secret access key
$InvalidationPaths = ""         # path(s) to invalidate, e.g. "/*"
Write-Host "Setting up AWS profile environment"
aws configure set aws_access_key_id $AccessKey --profile $CredentialsProfileName
aws configure set aws_secret_access_key $SecretKey --profile $CredentialsProfileName
aws configure set default.region $Region --profile $CredentialsProfileName
aws configure set preview.cloudfront true --profile $CredentialsProfileName
Write-Host "Initiating AWS CloudFront invalidation of the following paths: $InvalidationPaths"
aws cloudfront create-invalidation --profile $CredentialsProfileName --distribution-id $DistributionId --paths $InvalidationPaths
Write-Host "Please note that it may take 15-20 minutes for AWS to complete the CloudFront cache invalidation"
The script uses a profile setup for AWS credentials. If you don't want to use profiles, you can remove those bits from the script, but then you might have to set up credentials again for a different project every time.
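If you need the deployment step to report progress, the create-invalidation call returns an invalidation ID which you can then poll with get-invalidation. A sketch, assuming the profile and variables set up above and the CLI's default JSON output:

```powershell
# Kick off the invalidation and capture the response as JSON
$response = aws cloudfront create-invalidation --profile $CredentialsProfileName `
    --distribution-id $DistributionId --paths $InvalidationPaths | ConvertFrom-Json
$invalidationId = $response.Invalidation.Id

# Poll the status once; it moves from "InProgress" to "Completed"
$status = (aws cloudfront get-invalidation --profile $CredentialsProfileName `
    --distribution-id $DistributionId --id $invalidationId | ConvertFrom-Json).Invalidation.Status
Write-Host "Invalidation $invalidationId is $status"
```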
Happy New Year! Here is a small 2017 present from Dovetail to everyone.
We normally use Azure to host the apps we make. The whole single-click build-and-deploy process using TeamCity and Octopus Deploy is in place, and it's trivial for us to add new projects to this pipeline.
Recently, however, one of our clients wanted to host the .Net web app we're building for them on Amazon Web Services (AWS), because that's where the rest of their infrastructure is. Not too many people host their .Net apps on AWS, because MS Azure feels like a more natural fit. This meant it was a bit harder to find a fast and easy way to automate the deployment process to AWS through Octopus Deploy.
Anyway, we found this kind of half-baked solution (thanks!) on GitHub, made a few modifications, wrapped it up in a nice Octopus step template and made a pull request to the Octopus library. :)
The template got accepted and can now be obtained from their library.
We hope it might help someone else and save them some time in setting the whole thing up.
Also, here are some more resources about deploying .Net apps to AWS which we found interesting.
codeproject.com - AWS deployment with octopus deploy
AWS docs - awsdeploy.exe tool
Octopus discussions - AWS elastic beanstalk
Octopus discussions - Modifying machines in environments to support AWS autoscaling
Octopus discussions - AWS beanstalk deployment using octopus deploy
This short blog post will provide you with two SQL stored procedures which work with the SQL geometry data type to figure out how a lat/lng point correlates to spatial shapes on the DB level.
What this means is: you provide a lat/lng, and the DB returns either all the shapes that the point intersects with, or the nearest shapes to that point.
You could also use the geography type for this (the code should end up only slightly more complex), but we didn't need that, so we ended up using the geometry type.
We used these to identify company branches for a certain location on the map. The geometry data is stored in the SpatialData field (of geometry type), and we're returning the BranchId and BranchName but you should obviously modify that to your needs.
CREATE PROC SP_GET_INTERSECTING_BRANCHES
    @lat FLOAT,
    @lng FLOAT
AS
BEGIN
    DECLARE @point GEOMETRY
    SET @point = GEOMETRY::Point(@lng, @lat, 4326)

    SELECT BranchId, BranchName
    FROM [dbo].[Branch]
    WHERE @point.STIntersects(SpatialData) = 1
END
CREATE PROC SP_GET_CLOSEST_BRANCHES
    @lat FLOAT,
    @lng FLOAT,
    @amount INT
AS
BEGIN
    DECLARE @point GEOMETRY
    SET @point = GEOMETRY::Point(@lng, @lat, 4326)

    SELECT TOP (@amount) BranchId, BranchName, @point.STDistance(SpatialData) AS Distance
    FROM [dbo].[Branch]
    ORDER BY @point.STDistance(SpatialData)
END
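Assuming the procedures take @lat, @lng (and, for the second one, @amount) parameters, calling them would look something like this; the Dublin coordinates are just illustrative:

```sql
-- All branches whose shapes contain the point
EXEC SP_GET_INTERSECTING_BRANCHES @lat = 53.3498, @lng = -6.2603

-- The 5 branches nearest to the point, with distances
EXEC SP_GET_CLOSEST_BRANCHES @lat = 53.3498, @lng = -6.2603, @amount = 5
```

Note that GEOMETRY::Point takes (x, y, SRID), which is why the longitude comes before the latitude in the procedures.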
Hope it helps. Cheers!
Congratulations are in order as the following Dovetailers passed their Microsoft Certification exams.
Tomás and Murilo passed 70-461: Querying Microsoft SQL Server
Fabrizio and Kit passed 70-483: Programming in C#
John and Mossy passed 70-532: Developing Microsoft Azure Solutions
Progression is one of Dovetail's core values and we promote constant learning and improvement. In the fast-moving technical sector, no one can afford to sit still and we are already planning next year's Progression Goals.
This month some of my fellow Dovetailers and I took Microsoft Certification exams.
I took the exam 70-461: Querying Microsoft SQL Server 2012/2014.
With that in mind, I thought I might share some thoughts on my exam preparation and on some of the resources I used. I must say that I was not very proficient with SQL before studying for this exam, but with the right amount of preparation and study I passed and earned my certificate.
Below is a list of what I did while preparing for the exam:
- Study from different resources: books, video courses and practice exams.
- Find which kind of study material suits you best.
- Study over a period of 4 months.
- Study for at least 30 minutes every day.
- Try to study in the morning; I found it hard to study in the evenings.
- Take practice exams.
- Study the exam objectives.
- Once you feel ready, take the exam.
Below is a list of the study material I used and my thoughts on it.
The book I studied with was "Training Kit (Exam 70-461): Querying Microsoft SQL Server 2012 by Dejan Sarka, Itzik Ben-Gan, and Ron Talmage".
This is the official book for the exam. It is detailed and covers all the exam objectives. It also goes beyond the exam objectives, all of which was interesting and will prepare me for the next SQL exam. The book is around 700 pages.
The book comes with a free practice test, but this was not as good as the practice exams provided by Measure Up and Transcender.
Video Courses (70-461)
Pluralsight: The course was a bit too short and the instructor does not go through the topics thoroughly. I felt it works best as an introduction to the exam.
Joes 2 Pros: Very good material, the instructor goes into every topic in detail and provides labs for practicing as well. The website is kinda clunky but the videos are good.
CBT Nuggets: An alternative to Joes 2 Pros; it does not go into the same depth, but it does cover a lot of the topics and provides a lab for you to practice with. The free trial only lasts 7 days.
YouTube: There is a SQL Server tutorial playlist which covers more topics than what's on the 70-461, but it is a good free resource.
Measure Up: The interface did not properly format the SQL, so it can be quite difficult to read. Apart from that, the exams were useful.
Transcender: I really like Transcender; the interface is good, and they also provide flashcards, which were helpful when trying to understand an exam topic. I found it a really useful study tool. The exams were as good as Measure Up's, and the questions asked were similar in structure to those in the real exam.
I hope you find this blog post helpful in preparing for your 70-461 exam.
Around 4pm yesterday one of our clients began receiving error notifications from Worldpay.
The message was:
Our systems have detected that your callback has failed.
This callback failure means we were unable to pass information
to your server about the following transaction:
Transaction ID: 1111111111
Cart ID: 1111111111111
Installation ID: 1111111
Error reported: Callback to: https://example.com: failed CAUSED BY Remote host closed connection during handshake
Server Reference: 11111-11-1111:callbackFailureEmail-11111:11111111-11-11
Also, if you usually return a response page for us to display to the Shopper within the time allowed (1 minute), this will not have been displayed.
Googling the error "Remote host closed connection during handshake" shows that the message relates to the requesting service's handling of SSL certificates.
We hadn't changed the client's SSL cert for over a year. We had not deployed any recent software updates for the client, and we could see that multiple other payment processors, used by this system, were connecting to our server without issue. There were no errors in our server's Event Log or in the app's Logentries records.
We contacted Worldpay support, who were very helpful. They told us that SSL certs are cached on their systems, and can be cached for a long time (i.e. over a year). They also said their systems can't handle SNI.
So what seems to have happened is this: Worldpay's certificate cache was refreshed yesterday around 4pm. Our client's year-old certificate is served from a binding that requires SNI, so when Worldpay's systems (which can't send SNI) performed a fresh handshake, all subsequent connections from Worldpay failed.
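If you want to reproduce what a non-SNI client sees, OpenSSL's s_client can handshake without sending the SNI extension (the -noservername flag needs OpenSSL 1.1.1 or later; example.com is a placeholder for your own host):

```shell
# Handshake WITHOUT SNI - roughly what Worldpay's systems do
openssl s_client -connect example.com:443 -noservername

# Handshake WITH SNI, for comparison
openssl s_client -connect example.com:443 -servername example.com
```

If the first command fails while the second succeeds, the binding requires SNI.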
Options to fix this include (a) getting a new non-SNI certificate binding and (b) changing the callback URL to use HTTP.
Hopefully this post will assist if someone else experiences this issue.
Last week Dovetail exhibited at the IoT World Conference held in the Dublin Convention Centre.
It was a really interesting event with over 200 speakers and 150 exhibitors. The startup area was particularly interesting with a wide variety of new businesses showing their wares. With my background in mechanical engineering I was particularly taken with this strain gauge built with nanoparticles.
At the Dovetail stand we demonstrated the system we developed for Novaerus, which drew a lot of attention.
Despite how this picture looks, we didn't actually have a Martin Wallace mannequin. This was the real article, I think he just froze up for a second :)
At the start of every project I place a brief but concerted focus on what to call the system under development.
Why is a good name important?
- It promotes clear communication between stakeholders, and clarity is a Dovetail core value. I worry when a generic term like “the system” is used in a meeting - inevitably somebody is left wondering “Which system exactly?”
- It gives the nascent software system its own identity. This helps stakeholders to engage with the project even though it may still be abstract to them. They can visualise the solution better when it has a name, leading to more creativity and thorough analysis.
So what makes a good name? Here are my suggestions:
- It should be unique rather than generic. If it stands out a little it helps give the new system its own personality.
- It should be a single word, so short that it never occurs to anyone to abbreviate it in speech or writing. This promotes consistent use by being the easiest way to refer to the new system.
- Its pronunciation should be unambiguous. This removes the fear of saying it "wrong", another barrier to universal adoption.
- Don't try to describe the project in its name. You will probably end up with something cumbersome. The name will also be prone to irrelevance as the project grows and evolves.
- The meaning of the word really doesn't matter, so don’t sweat about it too much. Of course it can be a nifty acronym or something related to the project, but it can also just be a word that sounds good. Like a child, the project will grow into its name, everyone will get used to it, and eventually you won't be able to imagine any other name sounding right.
- Don't worry about the permanence of the name. You’re just choosing something for internal use by stakeholders. If the system is launched to a wider audience you can give it a public-facing name at that time, and it will probably be better than anything you think up at this stage.
- Do get buy-in from key stakeholders. Your goal is universal adoption: people find this surprisingly easy when their boss loves the name!
Here are some good examples of actual Dovetail projects:
HARPS was a neat acronym we laboured over when the project started years ago, but nobody remembers what it means now. Hermes is a project for a sports body, so we named it after the Greek god associated with sport. Athena was a seemingly random suggestion by a client after I shared my guidelines above.
As for the last two: when we’re stuck we just pick a bird’s name. It works every time, showing how unimportant the actual word is!