In one of the applications we're working on, we recently had to move from InfluxDB v0.8.7 to v0.9.6. Because the official migration paths didn't work for us (DB upgrades would either lose data or not finish at all), we had to develop a small C#/.NET app that would reliably execute the migration for us.
We successfully migrated around 4 GB of data with it and are quite happy with how it went. It took quite a bit of time, but all the data is safe and usable.
The app also lets you specify backfills (rollups) to be created once all the base data is migrated.
Today we're open-sourcing this migration tool in the hope that it might help someone else make the move as well. :)
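For the curious, the heart of such a migration is reshaping query results from the old format into the new one: v0.8 returns each series as a name with parallel columns/points arrays, while v0.9 wants self-contained points with a measurement, a timestamp and named fields. Our tool is C#, but the transformation boils down to something like this Python sketch (the series name and columns here are made up for illustration, not taken from our actual code):

```python
def convert_series(series):
    """Reshape an InfluxDB v0.8-style series dict into v0.9-style points.

    v0.8 query results look like
    {"name": ..., "columns": [...], "points": [[...], ...]};
    v0.9 writes take one dict per point with an explicit measurement
    and a fields dict. A real migration would also promote some columns
    (e.g. "host") to tags rather than fields.
    """
    time_index = series["columns"].index("time")
    points = []
    for row in series["points"]:
        # Every non-time column becomes a named field on the point.
        fields = {
            col: value
            for col, value in zip(series["columns"], row)
            if col != "time"
        }
        points.append({
            "measurement": series["name"],
            "time": row[time_index],
            "fields": fields,
        })
    return points

# Example: one v0.8 series with two points.
old = {
    "name": "cpu_load",
    "columns": ["time", "value", "host"],
    "points": [[1453737600, 0.64, "web1"], [1453737610, 0.71, "web1"]],
}
new = convert_series(old)
```

The batching, retry and progress-tracking logic around this is where most of the real work was.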
We've been using the InfluxDB time-series database for almost a year now on one of our projects, and it has worked pretty nicely even though it still hasn't hit the v1.0 mark.
We started our InfluxDB journey with v0.8.7, and even though we wanted to upgrade, there was no easy way to migrate to v0.9.x. We eventually reached a point, however, where we needed to upgrade in order to implement new features required by the project.
The first step was to see whether any .NET libraries supported InfluxDB v0.9, and the one we had been using from the start still seemed to be the best. The problem was that it hadn't been updated for quite a while and didn't support the latest InfluxDB versions.
So, I decided to fork it, refactor it and make it work with the latest InfluxDB. The code can be found on GitHub, it's under the MIT licence, and there is also a NuGet package on nuget.org. The integration tests are all passing again and the docs have been updated. Rejoice!
In the future, my plan is for the library to support the rest of the TICK stack layers as their APIs become more stable.
We're also planning on open-sourcing the migration tool that we developed and used to migrate the data from v0.8 to v0.9, in the hope that it might help someone else as well. :)
Recently we had a bit of a crisis when one of our Azure VMs decided to lose a bunch of data. Fortunately, backup jobs were set up through Azure's Recovery Services, and I had already used them a few times to restore or make copies of various VMs without any problems. A few clicks and you're ready to go. This was supposed to be an easy 20-minute task, but this time was different.
For whatever reason instead of getting a restored VM, I started getting the following restore job fail message:
Restore failed with an internal error.
Please retry the operation in a few minutes. If the problem persists, contact Microsoft Support.
Not very descriptive, and not really helpful. :/
The data transfer part of the job would succeed each time, but the "Create the Restored VM" step kept failing. I tried using different restore points from a day, a week, even a month back, but it made no difference. Eventually we had to submit a ticket to Microsoft to resolve the issue.
The two possible solutions that were presented to us were:
- either restore the VM under a new Azure Cloud Service - this worked fine, but wasn't really what we wanted to do (you don't want to pile up additional Cloud Services just to do a simple restore; it makes no sense and leaves messy infrastructure behind)
- or restore the VM through Azure PowerShell - this was a bit trickier, but it worked great in the end
So after a bit of research I realized that the Azure Web Portal doesn't actually use exactly the same back-end infrastructure as PowerShell, which is a bit weird and should probably be emphasized more throughout the Azure documentation.
Microsoft support told us to follow this documentation page to restore the VM using PowerShell, but the tutorial wasn't without its kinks either.
Perhaps this has been resolved by now, but for the whole thing to work I first needed Azure PowerShell v1. That turned out to be a bit of a pain, because it required the regular PowerShell v3, whereas Windows 8.1 ships with PowerShell v4, and downgrading was another mission impossible... In the end I managed to resolve the issue by installing the latest Azure PowerShell using the Microsoft Web Platform Installer. That gave me the much-needed Azure tooling for PowerShell. Yay!
Now to the code - these few PowerShell commands will extract the VHD from the backup:
> Select-AzureRmSubscription -SubscriptionName "YourSubscription"
> $backupvault = Get-AzureRmBackupVault -Name "YourBackupVault"
> $backupitem = Get-AzureRmBackupContainer -Vault $backupvault -Type AzureVM -Name "YourVmName" | Get-AzureRmBackupItem
> $rp = Get-AzureRmBackupRecoveryPoint -Item $backupitem
# $rp is a list of recovery points - change the index to select the one you want
> $restorejob = Restore-AzureRmBackupItem -StorageAccountName "yourStorageAccountName" -RecoveryPoint $rp[0]
> $restorejob = Get-AzureRmBackupJob -Job $restorejob
> $details = Get-AzureRmBackupJobDetails -Job $restorejob
From here I finished the process in the Azure Portal, as the rest of the PowerShell commands from the documentation seemed to be out of date and didn't work.
To complete the process, go to the VM section of the Azure Portal and select the "Disks" tab. From there you'll be able to create an unprovisioned disk from the restored VHD, which you will then use to create a new VM. Afterwards, click the + icon in the bottom-left corner and choose "create a VM - from gallery"; you will see an option to use your newly created disk. Finish the setup and you're good to go. :)
Hope this helps you if you find yourself in a similar situation. Cheers!
Until yesterday I was using the free version of Mindscape Web Workbench to handle my SASS files and compile them into CSS. Over time, however, as the CSS in our projects became more complicated and the files grew larger, Mindscape just wasn't quick enough. On Kit's advice, I decided to switch tools and move to Sassy Studio.
Sassy Studio relies on Ruby (it uses a Ruby library to compile the SASS), so you'll need to install Ruby too. The order in which you install these tools doesn't matter, and depending on your background you might already have Ruby on your machine. My Visual Studio version is 2012; you may need to check your version to find a compatible extension.
Install Sassy Studio
You can install it from https://visualstudiogallery.msdn.microsoft.com/85fa99a6-e4c6-4a1c-9f00-e6a8129b6f4d.
Install Ruby
You can install that from http://rubyinstaller.org/downloads/.
Once you have both installed, boot up Visual Studio and check your settings (the Ruby path is important; the rest is up to you). Here are mine:
That's it, done. My SASS is compiling much quicker, and I don't need to wait as long to refresh the page when I'm "making the logo bigger". Kit tells me there's an even faster tool written in C, which uses time travel to compile your CSS, but for now I'm happy with Sassy Studio.
Recently we had a small debate about Angular2 and what the benefits and pitfalls of using it for a project right now would be. In the end, Fabrizio and I came up with a short list of pros and cons.
Pros:
- TypeScript will force developers to write better code.
- Angular2 should be faster than Angular1.
- It is best not to keep investing in a framework that will shortly be discontinued.
- You will be one of the Angular2 pioneers.
- The development process will be very strict and it will require a good knowledge of the project.
- Localization of the application will be easier with the implementation of the shadow DOM.
- Debugging templates will be easier because they will raise runtime exceptions.
- The code needs to be built before deployment. This will slow down the process but will spot code errors and typos.
- Gaps between browser implementations of new standards will be handled by dedicated libraries (Angular2 will emulate the shadow DOM).
Cons:
- It is in an alpha version, which means the inner structure could be (and will be) subject to big breaking changes.
- The API is not stable yet (breaking changes will be introduced).
- Not all features are implemented yet (you will have to reinvent the missing parts; once they get officially implemented, your custom workarounds will be obsolete and probably not as optimized or as good as Angular2's).
- Not enough documentation yet, and not enough code examples on the web, so much of the work will be pioneering.
- The ecosystem is not there yet (not all libraries and tools have been ported). For example: there is an alpha of Bootstrap; Foundation isn't there yet; the router is not ready. The lack of convenient libraries may mean more development work.
- Both versions will remain on the market and both of them will be actively developed.
- The team is still figuring out "how to do things" for Angular2.
So that's what we came up with. Of course there is no ultimate answer, and surely Angular2 will be a good tool once it's ready. But until that happens, we think it's probably best not to use it for serious projects that need to go into production.
Speaking of framework readiness, here is an appropriate comic from CommitStrip that hits the spot.
I just recently started working on a small pet project to keep sharpening my skills across the whole stack in my free time. One of the things I decided to go with is angular-material. It's an official Angular implementation of Google's Material Design, and it uses a flexbox layout instead of the grid layout we're used to seeing in frameworks such as Bootstrap or Foundation. We talked it over a bit, and since we will most probably use Angular on most of our future projects, we decided to keep an eye on it so we can start using it when it feels production-ready.
Big data has been a hot topic in our industry for some time, and Dovetail has of course been exposed to the challenges of creating, consuming and reporting on large data sets. InfluxDB is our new friend on this journey.
Without going into the details of the project - Kit, John & Trevor have been working with a new client, recording large amounts of information from sensors and then creating valuable, responsive reports on the fly for end users.
We created the initial prototype using SQL Azure, but quickly found it would not perform well enough as a long-term solution. The team did some research into time-series databases and settled on InfluxDB as a great next step.
InfluxDB is developed specifically to handle time-series data and can automatically summarize data at specified intervals, resulting in much faster reports from large datasets.
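To make that idea concrete, here is a toy Python sketch of the kind of rollup a time-series database performs for you: raw points are bucketed by interval and reduced to a mean, so reports scan the small summary series instead of every raw point. The data and interval below are made up purely for illustration.

```python
from collections import defaultdict

def downsample(points, interval_seconds):
    """Bucket (timestamp, value) points by interval and average each bucket.

    This mimics what a time-series database does when it rolls raw data up
    into summary series: reports then read far fewer rows.
    """
    buckets = defaultdict(list)
    for timestamp, value in points:
        # Align each point to the start of its interval bucket.
        bucket_start = timestamp - (timestamp % interval_seconds)
        buckets[bucket_start].append(value)
    return {start: sum(vals) / len(vals) for start, vals in sorted(buckets.items())}

# Six raw sensor readings, ten seconds apart, rolled up into 30-second means.
raw = [(0, 1.0), (10, 2.0), (20, 3.0), (30, 10.0), (40, 20.0), (50, 30.0)]
rollup = downsample(raw, 30)
# rollup -> {0: 2.0, 30: 20.0}
```

The database does this continuously in the background, which is what makes querying months of sensor data responsive.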
Since we're all humming the song now, we might as well listen to it - https://www.youtube.com/watch?v=GDpmVUEjagg (great tune).
It took me a bit of time to upgrade certain parts of the site (mostly the blog), but it was worth it, as we're finally running our own website on the latest version of Umbraco.
I posted about the release of version 7 almost a year ago. The main upgrade is to the back-end UI, which has been given an overhaul. There is new functionality we still need to take advantage of here, but I'm liking what I've seen so far.
I've started collecting links that go past my Twitter feed, Facebook stream, Slack channels and many other news sources (read: distractions). These are my notable saves from this week. P.S. this won't be weekly, I'm not a machine!
Designing for website accessibility. A nice checklist of mostly obvious, but often overlooked, details on designing for visual impairments and other disabilities. The internet is like any other public space: it should be open and accessible to all who wish to use it.
IBM's new smart chip. We still haven't figured out why our brain is so powerful. Okay, we (not me, scientists) have a fair idea why it's powerful. I find it interesting that the faster chips are the ones trying to mimic a neural network.
The feature guy. You don't always make software better by adding features. Sometimes, taking away features or polishing existing functionality is a better use of time.
Balancing bike stations. A discussion that crops up here a lot is the difficulty of keeping the Dublin Bikes stations stocked with both bikes and free spaces. It turns out it's a lot more complicated than we thought, and it hasn't quite been solved yet.
Today, I wanted to create a clone of a SQL Server Azure database. I was looking at various ways of doing this, including exporting and re-importing the database, but thankfully, there is a much easier way.
I ran the following SQL command against the master database on the SQL Azure server:
CREATE DATABASE New_database
AS COPY OF Old_Database
For more details, check out the official documentation and this Idera blog post.
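One thing worth knowing: the copy runs asynchronously, so the command returns before the new database is ready. If you want to watch the progress, queries along these lines (run against the master database, reusing the example names from above) should do the trick:

```sql
-- While the copy is in progress, this returns a row with its progress.
SELECT database_id, start_date, percent_complete
FROM sys.dm_database_copies;

-- The new database shows up as 'COPYING' until the copy completes, then 'ONLINE'.
SELECT name, state_desc
FROM sys.databases
WHERE name = 'New_database';
```

Once the copy reports ONLINE, it's a fully independent, transactionally consistent database.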