The React Campaign: 1. Bootstrapping

by Patrick Bédat, November 20, 2017

After diving into React while programming a simple RSVP app for our wedding, my plans to introduce React into our company frontend stack are gaining momentum. Another reason is that we are stuck on AngularJS 1.4: we can't upgrade to the current version (breaking changes) due to a tight schedule, and upgrading to Angular >= 2 is not an option. (I'll spare you my ramblings, so here's a biased but inspirational post instead.)

To make React more approachable I decided to create an example repository. The problem with most examples is… Webpack, which is as powerful as it is complicated. So I set up a bootstrap example with just npm, jspm and the TypeScript compiler: https://gitlab.com/pbedat/ixts-react/tree/master/1_react-minimal-setup
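Under the hood there is not much more than a handful of npm scripts. A rough sketch of the idea (this is not the actual package.json from the repository, and http-server is just a stand-in for whatever static server you prefer):

{
  "scripts": {
    "postinstall": "jspm install",
    "compile": "tsc --watch",
    "serve": "http-server -p 8080"
  }
}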

Just fire it up:
git clone https://gitlab.com/pbedat/ixts-react.git && cd ixts-react/1_react-minimal-setup && npm install && npm run serve
(you will need to have Node.js and git installed)

From there you can try the app (http://localhost:8080) and play with the code (it will be recompiled automatically).
That's all for now, but I will be back with some mind-blowing React sugar very soon. So stay tuned!

Finally TypeScript

by Patrick Bédat, June 17, 2016

I started my life as a developer some 14 years ago with PHP. When I started my career as a professional developer three years later, I got in touch with statically typed languages, and it felt like a rebirth. IntelliSense, code completion and all the other utilities multiplied my overall hacking performance.

I was a web developer from the very beginning, and when AJAX hit the mainstream, I was again confronted with a dynamic language.
But this time it was different: this time I loved working with a dynamic language. I think it's because you can get things working incredibly fast: no type definitions, no mapping, no adaptation required, and the heavy lifting logic resided in the statically typed server code.

I still love JavaScript and I still feel very productive when bending it to my will. But… problems arise when other developers join in on a former solo project, or when you have to debug or extend code that is really old.

Hacking and Programming

My definition of hacking is:
To write the minimum amount of code with acceptable quality that is required to fulfill a requirement. Acceptable means: structured, decoupled and easy to refactor or replace.

When you write a lot of JavaScript this way, you automatically build up a notable amount of technical debt. First and foremost, because it is not documented. A good JavaScript library is only as good as its documentation. Some might argue that good code documents itself. Tell that to your coworker who was busy programming Delphi software for the desktop…

And programming…
is everything above that. (There is a lot of good literature about software craftsmanship and quality!)
In the case of JavaScript I think of using prototypes, manual type checking, solid documentation, unit tests, …

But even when you program JavaScript professionally and with discipline, you don't get the goodies of a statically typed language!

Epiphany

I always believed that switching to a statically typed language that compiles to JavaScript would be damn awesome… if only there were tools that supported it. But whenever I thought about introducing one in the company, the argument that JavaScript is just good enough and that establishing such a big change would cost too much effort weighed more.

But then some events changed my mind:

  • I witnessed how hard it was for new team members to work with the existing code. Not being able to browse the code by reference or through code completion is a serious handicap (without docs).
  • I successfully introduced ES6. Everybody was suddenly able to write ES5 and ES6 code side by side. TypeScript is like a static version of ES6.
  • Angular 2.0 was completely written in TypeScript.
  • There is an official TypeScript package for the Atom editor.

On a Saturday I committed to spending the evening integrating TypeScript into a new module of a project. It didn't even take me an hour. I was thrilled. Writing type-safe code on the client side doesn't just scale, it rocks!

Working with Atom

TypeScript has an official Atom package: apm install typescript

To set up your TypeScript environment you should create a tsconfig.json (https://www.typescriptlang.org/docs/handbook/tsconfig-json.html) in the root of your project.

Mine looks like this:

{
    "compileOnSave": false,
    "compilerOptions": {
        "module": "commonjs",
        "noImplicitAny": true,
        "removeComments": true,
        "preserveConstEnums": true
    }
}

TypeScript allows you to go dynamic whenever you want. When you specify "noImplicitAny": true, you have to mark your dynamic values explicitly with the "any" type, which is useful, because coming from JavaScript you aren't used to defining types at all.
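For example (a minimal sketch; the function names are made up):

// With "noImplicitAny" enabled, untyped parameters are a compile error,
// so dynamic values have to be declared as "any" explicitly.
function parsePayload(raw: any): string[] {
    return Object.keys(raw); // fine: "raw" is explicitly dynamic
}

// function parseBroken(raw) { ... } // error TS7006: parameter implicitly has an 'any' type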

So I wrote a class:
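Something along these lines (the class and its members are made up for illustration):

class GreetingService {

    constructor(private salutation: string) {
    }

    greet(name: string): string {
        return this.salutation + ", " + name + "!";
    }
}

const service = new GreetingService("Hello");
console.log(service.greet("TypeScript")); // "Hello, TypeScript!"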

Oh it feels so good 😀

Now I want to integrate the service into my Angular 1.x application. What? You say Angular 1.x is not written in TypeScript? Embrace Typings!


You can write so-called type definitions in TypeScript to program against type-safe interfaces of your untyped JavaScript sources. Typings is a package manager for TypeScript definitions!

Once installed (npm install -g typings) you can search for definitions (e.g. typings search jQuery) and install them (typings install --global jQuery --save-dev). This is how a definition file looks:
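Roughly like this (a made-up module, just to show the shape of a declaration file):

// greeter.d.ts: an illustrative declaration file for an untyped JS library
declare module "greeter" {

    export interface GreeterOptions {
        salutation?: string;
    }

    export function greet(name: string, options?: GreeterOptions): string;
}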


Typings will save all the definitions in the "typings" folder and persist your installed dependencies in a typings.json. All definitions are referenced in the typings/index.d.ts file, making them easily accessible in your code:
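For instance, with the jQuery definitions installed, a single triple-slash reference at the top of a source file is enough (assuming the file sits next to the typings folder):

/// <reference path="typings/index.d.ts" />

// $ is now fully typed, thanks to the installed definition
$("#app").addClass("ready");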


Gulp Integration

I wanted the integration of TypeScript to be as smooth as possible. So sneaky, in fact, that other members of the team won't even notice that they are able to write TypeScript side by side with the existing code.

This is the updated gulp task:
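In essence it boils down to something like this (paths and the task name are placeholders, not our actual setup):

var gulp = require("gulp");
var ts = require("gulp-typescript");

gulp.task("scripts", function () {
    // compile the .ts files that live right next to the existing .js sources
    return gulp.src("src/**/*.ts")
        .pipe(ts({
            module: "commonjs",
            noImplicitAny: true
        }))
        .js
        .pipe(gulp.dest("build"));
});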


"ts" refers to ts = require("gulp-typescript")

And that’s it basically.

Conclusion

I can't wait to write more TypeScript. So many new opportunities, like generating definitions from our RAML spec or from C# classes… The opportunity to upgrade our codebase to a whole new level of quality.


Dealing with Azure’s StorageException: Object reference not set… using mono

by Patrick Bédat, April 17, 2016

TL;DR

This article will show you
– how to utilize Docker to make a bug reproducible
– how to solve the problems with the Azure Storage client library

The bug

When our client told me we had to export some files to the Azure Storage of some of his clients, I thought: Awesome! I finally get in touch with Azure. Getting in touch with cloud services is almost always a good chance to get some new impressions of how to tailor APIs for the web. But it turned out differently this time…

So I added the WindowsAzure.Storage package from NuGet and was thrilled how easy the implementation was:

var account = new CloudStorageAccount (new StorageCredentials (accountName, accountKey), true);

var blobClient = account.CreateCloudBlobClient();

var container = blobClient.GetContainerReference(containerId);
container.CreateIfNotExists();

var blob = container.GetBlockBlobReference(Path.GetFileName(file));
blob.UploadFromFile (file);

Charming, isn't it? And it was, until files got bigger. When working for Media Carrier we often push gigabytes of data over the wire, and that's where the problem started:

Microsoft.WindowsAzure.Storage.StorageException: Object reference not set to an instance of an object
---> System.NullReferenceException: Object reference not set to an instance of an object
  at System.Net.WebConnectionStream.EndRead (IAsyncResult r) <0x41fc7860 + 0x0009e> in <filename unknown>:0 
  at Microsoft.WindowsAzure.Storage.Core.ByteCountingStream.EndRead (IAsyncResult asyncResult) <0x41fbc1c0 + 0x00024> in <filename unknown>:0 
  at Microsoft.WindowsAzure.Storage.Core.Util.AsyncStreamCopier`1[T].ProcessEndRead () <0x41fd7f10 + 0x0003b> in <filename unknown>:0 
  at Microsoft.WindowsAzure.Storage.Core.Util.AsyncStreamCopier`1[T].EndOperation (IAsyncResult res) <0x41fd71c0 + 0x00067> in <filename unknown>:0 
  at Microsoft.WindowsAzure.Storage.Core.Util.AsyncStreamCopier`1[T].EndOpWithCatch (IAsyncResult res) <0x41fd6e80 + 0x00073> in <filename unknown>:0 
  --- End of inner exception stack trace ---
  at Microsoft.WindowsAzure.Storage.Blob.BlobWriteStream.Flush () <0x41fdd180 + 0x0007b> in <filename unknown>:0 
  at Microsoft.WindowsAzure.Storage.Blob.BlobWriteStream.Commit () <0x41fdcf40 + 0x00023> in <filename unknown>:0 
  at Microsoft.WindowsAzure.Storage.Blob.BlobWriteStream.Dispose (Boolean disposing) <0x41fdced0 + 0x00043> in <filename unknown>:0 
  at System.IO.Stream.Close () <0x7f1e247c53d0 + 0x00019> in <filename unknown>:0 
  at System.IO.Stream.Dispose () <0x7f1e247c5400 + 0x00013> in <filename unknown>:0 
  at Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob.UploadFromStreamHelper (System.IO.Stream source, Nullable`1 length, Microsoft.WindowsAzure.Storage.AccessCondition accessCondition, Microsoft.WindowsAzure.Storage.Blob.BlobRequestOptions options, Microsoft.WindowsAzure.Storage.OperationContext operationContext) <0x41fc9e10 + 0x009e0> in <filename unknown>:0 
  at Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob.UploadFromStream (System.IO.Stream source, Microsoft.WindowsAzure.Storage.AccessCondition accessCondition, Microsoft.WindowsAzure.Storage.Blob.BlobRequestOptions options, Microsoft.WindowsAzure.Storage.OperationContext operationContext) <0x41fc9db0 + 0x0004b> in <filename unknown>:0 
  at Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob.UploadFromFile (System.String path, Microsoft.WindowsAzure.Storage.AccessCondition accessCondition, Microsoft.WindowsAzure.Storage.Blob.BlobRequestOptions options, Microsoft.WindowsAzure.Storage.OperationContext operationContext) <0x41fc9cc0 + 0x00097> in <filename unknown>:0 
  at windowsstoragebug.MainClass.Main (System.String[] args) <0x41f1cd60 + 0x004f0> in <filename unknown>:0 
Request Information
RequestID:c4dedc47-0001-0038-734d-962acc000000
RequestDate:Thu, 14 Apr 2016 12:56:26 GMT
StatusMessage:Created

And my first thought was: "What did I do wrong?" So I began messing around with the client settings:

blobClient.DefaultRequestOptions.ServerTimeout = new TimeSpan (1, 0, 0);
blobClient.DefaultRequestOptions.MaximumExecutionTime = new TimeSpan (1, 0, 0);
blobClient.DefaultRequestOptions.SingleBlobUploadThresholdInBytes = 67108864; //64M

No luck. So I googled: https://github.com/Azure/azure-storage-net/issues/202. It is a similar issue, but the proposed workarounds didn't help.
It was time to dive a bit deeper into the Azure Storage service. How does it work?

Azure Storage and PutBlock

Azure Storage (similar to S3 in AWS) can be used to store files. The files are stored as blobs, block blobs in this case. Block blobs are organized in containers (which would be buckets in S3).
You can either upload a block blob in a single transaction (when the blob is smaller than 64 MB), or upload it split into blocks, each smaller than 4 MB, with a maximum of 50,000 blocks. Each block gets a unique block ID in natural order. When all blocks have been transmitted to Azure, you commit the transaction by sending the whole list of block IDs.

See https://msdn.microsoft.com/de-de/library/azure/ee691974.aspx

I felt lucky when I saw that the Azure client offered a PutBlock and a PutBlockList method. So I tried to upload the file in chunks:

using(var stream = File.OpenRead(file))
{
    int position = 0;
    const int BLOCK_SIZE = 4 * 1024 * 1024; // blocks must stay below the 4 MB limit
    int currentBlockSize = BLOCK_SIZE;

    var blockIds = new List<string>();
    var blockId = 0;

    // as long as a full block was read, there is more data to upload
    while(currentBlockSize == BLOCK_SIZE)
    {
        if ((position + currentBlockSize) > stream.Length)
            currentBlockSize = (int)stream.Length - position;

        if(currentBlockSize == 0)
            continue;

        byte[] chunk = new byte[currentBlockSize];
        stream.Read (chunk, 0, currentBlockSize);

        // block IDs must be base64 encoded and all of the same length,
        // hence the zero padded "d5" format
        var base64BlockId = Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(blockId.ToString("d5")));

        using(var memoryStream = new MemoryStream(chunk))
        {
            memoryStream.Position = 0;
            blob.PutBlock(base64BlockId, memoryStream, null);
        }

        blockIds.Add(base64BlockId);

        position += currentBlockSize;
        blockId++;
    }

    // commit the upload by sending the complete, ordered list of block IDs
    blob.PutBlockList(blockIds);
}

I ran it on my machine and burst into dancing. Not for long though… After deploying it, uploads suddenly got stuck. Then on my machine (and later on the staging server) it suddenly gave me these lines:

_wapi_handle_ref: Attempting to ref unused handle 0x4af
_wapi_handle_unref_full: Attempting to unref unused handle 0x4af

which means that something is very very wrong…

Challenge accepted

When a 3rd party library isn't working correctly, I usually try to rule out possible errors on my side first. So after 2 a.m. I did some funny things with memory streams that I really don't want to show you. After my PutBlock experiment didn't work out, my motivation started to multiply. I cannot rest when some 3rd party library, which is supposed to just work, simply doesn't bend to my will.

My next plan was to implement parts of the storage client against the Azure REST API.

Being a proud and arrogant developer, I believed that I could hack that WebRequest code for Azure together in minutes… Behold the Authorization header. What a pain in the ass, seriously. Just take a look at this: https://msdn.microsoft.com/de-de/library/azure/dd179428.aspx

Then I came across this beautiful article.

With the help of that article I was finally able to write my own implementation of PutBlock (I snatched CreateRESTRequest from it):

// CreateRESTRequest (taken from the article) builds and signs the HttpWebRequest
var request = CreateRESTRequest ("PUT", $"{container.Name}/{blobId}?comp=block&blockid={base64BlockId}", chunk);

var response = request.GetResponse () as HttpWebResponse;

using (var responseStream = new StreamReader(response.GetResponseStream ()))
{
    var output = responseStream.ReadToEnd ();
    if (!string.IsNullOrEmpty (output))
        yield return output;
}

AND IT WORKED LIKE A CHARM. And still does.

Making the bug reproducible

The journey doesn’t end here.

The Azure storage library is open source, and whenever you use open source software you should give something back from time to time. Either through contributions, money or by being a good bug reporter.
Filing an issue on GitHub is easy, but if you want to have it fixed, make sure it is easily reproducible. This saves a lot of time for the hard-working open source contributors.

So how do we get there? The bug happened on my Ubuntu Linux 14.04 running Mono Stable 4.2.3.4/832de4b. Telling somebody to set up a VM, check out my sample code, compile it and run it would be a lot to ask.
That’s where Docker comes into play.

To run the desired environment, we have to write a Dockerfile:

# Define the base image
FROM ubuntu:14.04
MAINTAINER Patrick Bédat <patrick.bedat@ixts.de>

# Add the mono sources
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
RUN echo "deb http://download.mono-project.com/repo/debian wheezy main" | tee /etc/apt/sources.list.d/mono-xamarin.list

# Install mono and git
RUN apt-get update && \
    apt-get install -y \
    git mono-complete

RUN mozroots --import --sync

# Clone the sample project and build it
RUN git clone git://github.com/pbedat/azure-mono-bug.git && \
    cd azure-mono-bug && \
    mono NuGet.exe restore && \
    xbuild azure-storage-bug.sln

# This tells docker what to do when we run this image
ENTRYPOINT ["mono", "/azure-mono-bug/azure-storage-bug/bin/Debug/azure-storage-bug.exe"]

From this Dockerfile you can build an image:

docker build -t azure-mono-bug ./azure-mono-bug

Then you can run containers based on this image:

docker run azure-mono-bug <account-name> <account-key>

The application then
– creates a 500 MB file
– tries to upload it to Azure with UploadFromFile
– tries again with the PutBlock method

Conclusion

I've pushed the image to the Docker Hub. Now anybody using Docker can run the sample app by typing

docker run pbedat/azure-mono-bug <account-name> <account-key>

Nuff said.

KISSing Markdown

by Patrick Bédat, July 28, 2013

Sharing is Caring

A little while ago I created a job posting in Word and wanted to get my colleagues' opinions on it. But share a Word document as an attachment? No, then I might as well send a fax. So into our corporate Google Drive it went, shared with the colleagues, and then I waited for feedback.

In the next step the posting was, of course, also supposed to go on our new homepage. Just as I had closed the first <h1> again, the DRY voice in the back of my head spoke up: "Don't repeat yourself!" it hissed, quietly but in a sharp tone.

Keep it simple, stupid

Luckily we don't use an oversized CMS that automagically integrates Google Docs content into our site, but a lightweight home-grown framework for hosting static content. So I was looking for a format that I could let my colleagues review without hurting them with angle brackets, and that could still easily be integrated into our website as content.

Admittedly, I didn't really have to search. Markdown has become my standard for every kind of text file, and I wasn't too surprised to find that I had already integrated a Markdown extension into my Visual Studio a long time ago.

Visual Studio offers numerous extensions for Markdown. But thanks to the very readable syntax, they are basically superfluous.

Ubiquitous awesomeness

Markdown, an almost ubiquitous standard on the web (StackOverflow, Git, Bitbucket, etc.), renders wonderfully readable markup into HTML. So I recreated my job posting in Markdown format and, lo and behold, our mini framework turns into a kind of content management system!

The Markdown file is rendered by an HtmlHelper extension method.

And out comes a great new job posting!

The only fly in the ointment for me as a Confluence user: at some point Atlassian decided to remove Markdown support from Confluence. Their argument: for people with little technical background, Markdown is probably too avant-garde 😉