Copy files with PowerShell Remoting

Recently, at work, I found myself in the situation where I needed to copy some files from my workstation to a jump box. On Linux I’d just use rsync or scp, but our IT doesn’t like provisioning Linux boxes and therefore uses Windows for jump servers too, so no luck there. Luckily, I could convince them to turn on and allow PowerShell Remoting, so with a couple of simple functions I can still easily copy files over without using SMB or creating more hassle with IT.

function Copy-LocalToRemote(
    [Parameter(Mandatory = $true)] $LocalPath,
    [Parameter(Mandatory = $true)] $RemotePath,
    $ComputerName = 'my.default.target.host'
) {
    Invoke-Command -ComputerName $ComputerName `
        {
            param($path, $content)
            Set-Content -Path $path -Value $content `
                -AsByteStream
        } `
        -ArgumentList $RemotePath,(
            Get-Content $LocalPath -Raw -AsByteStream)
}

function Copy-RemoteToLocal(
    [Parameter(Mandatory = $true)] $RemotePath,
    [Parameter(Mandatory = $true)] $LocalPath,
    $ComputerName = 'my.default.source.host'
) {
    Invoke-Command -ComputerName $ComputerName `
        {
            param($path)
            Get-Content -Path $path -Raw -AsByteStream
        } `
        -ArgumentList $RemotePath |
    Set-Content -Path $LocalPath -AsByteStream
}

New-Alias -Name 'ltr' -Value 'Copy-LocalToRemote'
New-Alias -Name 'rtl' -Value 'Copy-RemoteToLocal'
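
With the aliases in place, a copy looks something like this (the host name and paths here are just examples, not anything from my environment):

# Push a local script to the jump box...
ltr -LocalPath .\deploy.ps1 -RemotePath 'C:\Temp\deploy.ps1' -ComputerName 'jump01.example.com'
# ...and pull a log file back from it.
rtl -RemotePath 'C:\Temp\deploy.log' -LocalPath .\deploy.log -ComputerName 'jump01.example.com'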

As you can see, this is quite simple. Obviously, the functions above can only copy one file at a time. Maybe in the future I’ll build something that can copy entire directory structures recursively. I also haven’t spent any time looking at how efficient it is to pass file contents this way; in fact, I wouldn’t be surprised at all if it performed poorly for large files. But then again, I’m mostly pushing around scripts and config files, so this works just fine.

Cheap and Secure Cloud Backups

I’ve wanted to find a good provider of cheap and secure cloud backups for a while. I’ve compared some cloud drive providers, but didn’t quite like those. They usually have very limited free plans, somewhat pricey paid plans (e.g. 50GB for about 24$ a year for OneDrive), or like in the case of Google no information available at all. By the way, “Google one is coming soon” isn’t an announcement that I want to look at for more than a few days when looking for pricing info.

Then, I’ve looked at pricing of cloud storage providers, such as AWS, Azure and Google Cloud. Those offer storage around 1 cent ($0.01) per GB per month. That’s a quarter of the OneDrive cost! It’s even less if you consider their archive offerings (AWS Glacier, Archive in Azure, Coldline Storage for Google). The cheapest offering here is from Microsoft at 0.2 cents ($0.002) per GB per month, but with some usage caveats. Since the point of backups is to keep them for a long time, this quickly adds up though.

Now I’ve written a line or two of code before, so I figured I could as well write my own tool for this. So here is bart, the backup and restore tool. Note that at this point I do not offer bart as a ready-to-use executable, but only as MIT-licensed source code. In addition, bart currently works only with Azure Blob Storage – or with storage mounted into the machine’s file system. However, adding other cloud providers/archive destinations should be relatively easy, given the interfaces used in the tool.

Security

In terms of security, bart encrypts every file before storing it in the archive destination. A user-provided password is used together with a randomly generated salt to derive a key for AES encryption. On first use of an archive destination, bart generates a random salt, and each archive has its own password and salt. To keep anybody with access to the archive destination from even snooping the names of your files, the file names are hashed (SHA1) and the hashes are used as the names of the encrypted files. This has the disadvantage that renaming or moving a file results in a new file in the destination archive, though.
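
To illustrate the idea, here is a rough PowerShell sketch of the concept – not bart’s actual code, and bart may well use a different key derivation:

# Random salt, generated once per archive destination.
$salt = [byte[]]::new(16)
$rng = [System.Security.Cryptography.RandomNumberGenerator]::Create()
$rng.GetBytes($salt)

# Derive a 256-bit AES key from the user's password and the salt (PBKDF2 here).
$password = 'correct horse battery staple'   # placeholder, obviously
$kdf = [System.Security.Cryptography.Rfc2898DeriveBytes]::new($password, $salt, 100000)
$aesKey = $kdf.GetBytes(32)

# Hash the relative file name (SHA1) and use the hex digest to name the encrypted blob.
$sha1 = [System.Security.Cryptography.SHA1]::Create()
$nameBytes = [System.Text.Encoding]::UTF8.GetBytes('photos/2018/beach.jpg')
$blobName = [System.BitConverter]::ToString($sha1.ComputeHash($nameBytes)).Replace('-', '').ToLowerInvariant()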

Usage

Once you have compiled bart, you can use it as follows.

./bart [-name string] [-path string] [-m noop|restore|delete] -acct string -key string
  -name string
        The name of the backup archive. (default "backup")
  -path string
        The path to the directory to backup and/or restore. (default ".")
  -m string
        A behavior for files missing locally: 'noop' to do nothing, 'restore' to restore them from the backup, 'delete' to delete them in the backup archive. (default "noop")
  -acct string
        The Azure Storage Account name.
  -key string
        The Azure Storage Account Key.
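
For example, backing up a local photos directory to an archive named photos could look like this (the account name is a placeholder, and the key of course belongs somewhere safer than your shell history):

./bart -name photos -path /home/me/photos -acct mystorageaccount -key '<storage-account-key>'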

Sources

The sources are on GitHub @ https://github.com/rokeller/bart.

Conclusion

I’ve used bart for backup of some photos/videos for a while now. For the about 42GB I have uploaded so far my monthly bill from Microsoft is about 42 cents ($0.42). Those months where I upload new files the cost is a little higher (a few cents usually) because of the extra transactions. My backed up files are encrypted. If this isn’t cheap and secure cloud backups, what is?

Fix slow kubectl on Windows

Over the last few days I noticed that when I use kubectl to manage a k8s test cluster in Azure, it takes forever to actually carry out the operations remotely. Today I took some time to debug this. Here’s how to fix a slow kubectl on Windows.

Get Verbose Output

I started with changing the log level, and capturing the details, like this:

kubectl get pods -v=20

The good news is that, given how slowly the commands ran, I had enough time to just read what was going on and even understand where the problem was. If yours isn’t that slow, it helps to redirect stderr to a file, like this:

kubectl get pods -v=20 2> err.txt

In my case, it turned out that the command was going through a cache which was on the H: drive. That may not mean much to you, but my employer’s IT maps the H: drive to the (remote) home directory. They also set the HOMEDRIVE, HOMEPATH and HOMESHARE environment variables on login; HOMEDRIVE in particular is set to H:. Given that Windows (unlike Linux) doesn’t come with a HOME environment variable by default, kubectl for Windows tries to make up for that by constructing the HOME path from HOMEDRIVE and HOMEPATH. So kubectl ended up caching everything on a remote share, some 8500 km away. Needless to say, the lag between my workstation and the remote share is noticeable.
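
If you want to check whether you’re affected, listing the relevant environment variables is a one-liner (HOME will usually be missing on Windows unless something set it):

Get-Item env:HOME, env:HOMEDRIVE, env:HOMEPATH, env:HOMESHARE, env:USERPROFILE -ErrorAction SilentlyContinue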

How to fix slow kubectl on Windows

So, how do you fix this? Well, it’s actually very easy: set the HOME environment variable to a local directory, run kubectl again, and now it’s a lot faster. In PowerShell, for that session, I just did

$env:HOME = $env:USERPROFILE
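
If you don’t want to repeat this in every session, one option is to persist the variable at the user level (this writes to your user environment, so make sure your IT policies are fine with it):

[Environment]::SetEnvironmentVariable('HOME', $env:USERPROFILE, 'User')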

Now what’s left for me is to try and convince the IT department to stop using the HOMEDRIVE and HOMESHARE for remote users. That’s the tough part 😉

Lucene.Net.ObjectMapping for .Net Standard 2.0

It’s been a long time since I’ve done some work on my Lucene.Net.ObjectMapping library. Recently I accepted a pull request that added support for the 4.8 beta releases of Lucene.Net itself, but when I involuntarily needed to updated one of my services to bring it up to speed with running in a Docker container, I decided that it was about time to update Lucene.Net.ObjectMapping for .Net Standard 2.0. The last time I used the library in a Docker container, ASP.NET vNext RC1 was just about to become final. so that’s a long time ago. Accordingly, there was quite a bit of work to understand the changes needed: both in .Net (and ASP.NET) between the 1.0 RC1 and the .Net Standard 2.0 releases, and also between the Lucene.Net 3.x and 4.8 releases. Luckily, the latter was largely taken care of by the pull request for the library itself. The former however proved a bit challenging. After all, the toolset has changed significantly.

Updated Sources

To cut a long story short, the updated sources are now available on GitHub. I decided to track it in a separate branch for better isolation. This new branch is aptly called netstandard. I’ll try to stay up-to-date with the more recent releases of Lucene.Net, and also with .Net Standard 2.0. That is, provided that I find the time for it. You may notice that the project files have become quite a bit simpler. That’s certainly one change in .Net Standard and Core that I welcome. The other is the better integration of Nuget for package referencing and package creation/pushing.

Updated Unit Tests

As a side effect, I figured that it was going to be easier to update NUnit to the latest version too, since it is also well integrated with the new dotnet toolset. Since I’m doing all changes through VSCode, with building/testing/packaging in Docker containers based on the microsoft/aspnetcore-build:2 images, I wanted to keep it simple. The good thing here is that the dotnet toolset seems to offer really everything I need for this, and is surprisingly easy to handle, especially when compared to the RC1 version.
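
For reference, the inner loop inside the build container boils down to the standard dotnet verbs (exact project paths omitted, since they depend on the repository layout):

dotnet restore
dotnet build -c Release
dotnet test
dotnet pack -c Release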

Updated Nuget Package

As I’ve mentioned in the beginning, I primarily made this effort because I needed a newer version of Lucene.Net with compatibility for .Net Standard 2.0. As a result, I published a new RC build as a Nuget package too. It is built on the latest Lucene.Net 4.8 beta release and currently supports only .Net Standard 2.0. If there’s a great demand for it, I’ll see if I can add support for other targets – or accept pull requests accordingly.

Conclusions

Nothing much besides the obvious: .Net Standard seems to be in good shape with regard to libraries and toolset, as well as support on Linux. There are a few gotchas, but overall nothing much of a problem. Lucene.Net itself is still somewhat badly documented, and the tracking of breaking changes between major/minor versions (and in fact also between revisions/beta releases of the same major/minor version) could be greatly improved. Online documentation would be very useful – maybe it exists and I just haven’t found it? In any case, skimming through the Lucene.Net sources on GitHub works too, though it is much slower.

You can find more information about object mapping for Lucene.Net on the Lucene.Net.ObjectMapping page.