Dynamic Storage Bar (à la iTunes Usage Bar) written in Swift


It is also possible to add captions underneath the bar.

The usage is really straightforward. Include the ROStorageBar.swift and the Helper.swift file in your project, create a UIView in the Storyboard and set its class to ROStorageBar. In the viewDidLoad method you can add ROStorageBarValues with the following code:

override func viewDidLoad() {
    super.viewDidLoad()

    storageBar.add(0.2, title: "Apps", color: UIColor(hex:"#FFABAB"))
    storageBar.add(0.15, title: "Documents", color: UIColor(hex:"#FFD29B"))
    storageBar.add(0.21, title: "Photos", color: UIColor(hex:"#DDEBF9"))
    storageBar.add(0.3, title: "Movies", color: UIColor(hex:"#c3c3c3"))

    // Or use the struct directly to add an item
    storageBar.addStorageBarValue(ROStorageBarValue(value: 0.6, title: "Backups", color: UIColor(hex:"#A8DBA8")))

    storageBar.unit = "GB"
    storageBar.displayTitle = false
    storageBar.displayValue = false
    storageBar.displayCaption = true
    storageBar.titleFontSize = 10.0
    storageBar.valueFontSize = 10.0

    var numberFormatter = NSNumberFormatter()
    numberFormatter.maximumFractionDigits = 2
    numberFormatter.minimumIntegerDigits = 1

    storageBar.numberFormatter = numberFormatter
}

The size of the ROStorageBar is defined by the size of the UIView, so Auto Layout works perfectly with it; adaptation and re-rendering are handled automatically by the library itself. If you have set displayCaption to true, the view is automatically split in half: the upper half is used for the bar and the lower half for the captions. If there aren't any captions, the bar takes the full height of the UIView.
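The proportional layout the bar performs boils down to scaling each value against the sum of all values. Here is a minimal, language-neutral sketch of that idea in Java (ROStorageBar itself is Swift; the names here are hypothetical):

```java
import java.util.Arrays;

public class BarLayout {
    // Given the segment values and the total bar width in points,
    // return each segment's width proportional to its value.
    static double[] segmentWidths(double[] values, double barWidth) {
        double total = Arrays.stream(values).sum();
        double[] widths = new double[values.length];
        for (int i = 0; i < values.length; i++) {
            widths[i] = values[i] / total * barWidth;
        }
        return widths;
    }

    public static void main(String[] args) {
        // Values from the example above: Apps, Documents, Photos, Movies
        double[] widths = segmentWidths(new double[]{0.2, 0.15, 0.21, 0.3}, 300.0);
        System.out.println(Arrays.toString(widths));
    }
}
```

The widths always sum to the bar width, so the segments fill the UIView exactly regardless of the unit the values are in.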

UIColor Extension

The extension is only used for easier color creation and can easily be left out. I left it in because someone else might also find the hex-to-UIColor conversion useful.

Here is a short example:

var color = UIColor(hex:"#A8DBA8")
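The conversion itself is just parsing the three hex pairs of the "#RRGGBB" string into channel values. A sketch of that parsing in Java (the Swift extension's internals may differ):

```java
public class HexColor {
    // Parse "#RRGGBB" into an int[3] of 0-255 channel values.
    static int[] parseHex(String hex) {
        String h = hex.startsWith("#") ? hex.substring(1) : hex;
        return new int[] {
            Integer.parseInt(h.substring(0, 2), 16),  // red
            Integer.parseInt(h.substring(2, 4), 16),  // green
            Integer.parseInt(h.substring(4, 6), 16)   // blue
        };
    }

    public static void main(String[] args) {
        int[] rgb = parseHex("#A8DBA8");
        System.out.printf("r=%d g=%d b=%d%n", rgb[0], rgb[1], rgb[2]); // r=168 g=219 b=168
    }
}
```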

The library can be found on https://github.com/prine/ROStorageBar

Swift JSON Parsing directly into object structure

Parsing JSON in Objective-C was always repetitive work and not straightforward at all.
With Swift and the work of David Owens (https://github.com/owensd/json-swift) I was able to implement a pretty straightforward JSON to Object mapper.

As a basis I am gonna use this very small JSON example:

{
    "employees": [
        { "firstName": "John", "lastName": "Doe", "age": 26 },
        { "firstName": "Anna", "lastName": "Smith", "age": 30 },
        { "firstName": "Peter", "lastName": "Jones", "age": 45 }
    ]
}

Next step is to define the object model. I am gonna create two classes: EmployeeContainer and Employee.
The EmployeeContainer holds an array of Employee objects.

The definition looks like this:


class EmployeeContainer : ROJSONObject {
    required init() {
        super.init()
    }

    required init(jsonData:AnyObject) {
        super.init(jsonData: jsonData)
    }

    lazy var employees:[Employee] = {
        return Value<[Employee]>.getArray(self, key: "employees") as [Employee]
    }()
}

class Employee : ROJSONObject {
    required init() {
        super.init()
    }

    required init(jsonData:AnyObject) {
        super.init(jsonData: jsonData)
    }

    var firstname:String {
        return Value.get(self, key: "firstName")
    }

    var lastname:String {
        return Value.get(self, key: "lastName")
    }

    var age:Int {
        return Value.get(self, key: "age")
    }
}

So basically this is the object model, and at the same time the mapping from the properties in the JSON file to the properties of the Swift classes. The following line uses my generic method to fetch a value from the JSON dictionary that was already prepared by David Owens' library.

Every JSON-supported datatype can be accessed directly through the generic Value class.

return Value.get(self, key: "firstName")
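The idea behind the generic Value accessor can be sketched language-neutrally. Here it is in Java over a plain map (the actual ROJSONObject mechanics differ; this only illustrates the "cast to the caller's expected type" trick):

```java
import java.util.HashMap;
import java.util.Map;

public class Value {
    // Fetch a value for a key and cast it to whatever type the caller
    // expects, mirroring the generic Value.get(self, key:) accessor.
    @SuppressWarnings("unchecked")
    static <T> T get(Map<String, Object> json, String key) {
        return (T) json.get(key);
    }

    public static void main(String[] args) {
        Map<String, Object> employee = new HashMap<>();
        employee.put("firstName", "John");
        employee.put("age", 26);

        String firstName = get(employee, "firstName");
        int age = get(employee, "age");
        System.out.println(firstName + " is " + age); // John is 26
    }
}
```

The caller never repeats the type in the call itself; the compiler infers it from the property declaration, which is what keeps the mapping classes so short.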

Then I created a BaseWebservice class which asynchronously loads the JSON file and returns it in a callback. I am not gonna show
the code of the BaseWebservice class here, but you can find it on my GitHub account: https://github.com/prine

The call of the webservice and the actual conversion into the data object is done here (in the ViewController):

var baseWebservice:BaseWebservice = BaseWebservice()
var urlToJSON = "http://prine.ch/employees.json"

var callbackJSON = {(status:Int, jsonResponse:AnyObject!) -> () in
    var employeeContainer:EmployeeContainer = EmployeeContainer(jsonData: jsonResponse)
    for employee in employeeContainer.employees {
        println("Firstname: \(employee.firstname) Lastname: \(employee.lastname) age: \(employee.age)")
    }
}

baseWebservice.get(urlToJSON, callback:callbackJSON)

Which provides the following output:

Firstname: John Lastname: Doe age: 26
Firstname: Anna Lastname: Smith age: 30
Firstname: Peter Lastname: Jones age: 45

The whole code can be found on GitHub under the following link:

Wikipedia API Objective-C Library

With this very small library you're able to load an article from the Wikipedia API. At the moment three different methods are provided:

// Fetches a Wikipedia article by name
- (NSString *)getWikipediaArticle:(NSString *)name;

// Returns the HTML page of a Wikipedia article searched by name
- (NSString *)getWikipediaHTMLPage:(NSString *)name;

// Returns the URL of the main image of a Wikipedia article searched by name
- (NSString *)getUrlOfMainImage:(NSString *)name;

Here is an example of how you should use the WikipediaHelper class:

WikipediaHelper *wikiHelper = [[WikipediaHelper alloc] init];
NSString *searchWord = @"Elefant";

NSString *article = [wikiHelper getWikipediaArticle:searchWord];
NSString *htmlSource = [wikiHelper getWikipediaHTMLPage:searchWord];
NSString *urlImage = [wikiHelper getUrlOfMainImage:searchWord];
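Under the hood such helpers query the public MediaWiki API. Whether WikipediaHelper uses exactly this endpoint is an assumption, but building such a request URL looks roughly like this (Java sketch; action=parse is the standard MediaWiki way to get the rendered HTML of a page):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class WikiQuery {
    // Build a MediaWiki API URL that fetches the parsed HTML of an article.
    // (Assumed endpoint for illustration; the library's internals may differ.)
    static String buildParseUrl(String title) {
        String encoded = URLEncoder.encode(title, StandardCharsets.UTF_8);
        return "https://en.wikipedia.org/w/api.php?action=parse&format=json&page=" + encoded;
    }

    public static void main(String[] args) {
        System.out.println(buildParseUrl("Elefant"));
    }
}
```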

You see the main image of the Wikipedia article and at the bottom the loaded web view.

You will find the library and the example project in the following git project:


Neural Network as Predictor for Image Coding (PNG)

Research Topic

The main topic was to enhance the currently existing PNG prefilters (http://en.wikipedia.org/wiki/Portable_Network_Graphics#Filtering) with a new filter that internally uses a Neural Network to create a better prediction, which would lead to a better compression.


Basically the PNG compression is divided into two steps:

  1. Pre-Compression (Using Predictors)
  2. Compression (Using DEFLATE)
In this project only the first step is relevant. In the following illustration you see the currently existing prefilters and how the predictors store the difference between the predicted pixel and the real pixel.
Prefilter enhancement with Neural Network Filter

The existing filters plus the new filter definition:

Type | Name           | Filter Function                                               | Reconstruction Function
0    | None           | Filt(x) = Orig(x)                                             | Recon(x) = Filt(x)
1    | Sub            | Filt(x) = Orig(x) - Orig(a)                                   | Recon(x) = Filt(x) + Recon(a)
2    | Up             | Filt(x) = Orig(x) - Orig(b)                                   | Recon(x) = Filt(x) + Recon(b)
3    | Average        | Filt(x) = Orig(x) - floor((Orig(a) + Orig(b)) / 2)            | Recon(x) = Filt(x) + floor((Recon(a) + Recon(b)) / 2)
4    | Paeth          | Filt(x) = Orig(x) - PaethPredictor(Orig(a), Orig(b), Orig(c)) | Recon(x) = Filt(x) + PaethPredictor(Recon(a), Recon(b), Recon(c))
5    | Neural Network | Filt(x) = Orig(x) - NN(arrayOfInputPixels)                    | Recon(x) = Filt(x) + NN(arrayOfInputPixels)
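For reference, the Paeth predictor used by filter type 4 is fully specified by the PNG standard; in Java it looks like this:

```java
public class Paeth {
    // PaethPredictor from the PNG spec: pick whichever of a (left),
    // b (above) or c (upper-left) is closest to the estimate a + b - c.
    static int paethPredictor(int a, int b, int c) {
        int p = a + b - c;
        int pa = Math.abs(p - a);
        int pb = Math.abs(p - b);
        int pc = Math.abs(p - c);
        if (pa <= pb && pa <= pc) return a;
        if (pb <= pc) return b;
        return c;
    }

    public static void main(String[] args) {
        int orig = 120;
        int predicted = paethPredictor(118, 121, 119); // left, above, upper-left
        int filt = orig - predicted; // this difference is what gets stored
        System.out.println("predicted=" + predicted + " filt=" + filt);
    }
}
```

The Neural Network filter in row 5 follows the same pattern: only the predictor function changes, the store-the-difference mechanism stays identical.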

Neural Network as Predictor

The last filter is my new implementation. It internally uses the Neural Network with an input array of pixels and returns the predicted pixel value as its result. Like in the other filters, the difference between the original value and the predicted value is stored.

But what exactly are these input values I feed into the Neural Network predictor? In the following illustration I try to describe this process of feeding the Neural Network more clearly.

Basically there are three different parts:

  1. Copied Pixels (RED)
  2. Input Pixel (GREEN)
  3. Predicted Pixel (BLUE)
Input values for the Neural Network

Copied Pixels
All the red area is copied 1:1. My Neural Network filter is not able to make a prediction out of nowhere, which is why I have to copy at least the border area of an image. With the current Neural Network layout I am using:

  • 28 input neurons (marked green) – 8×4 pixels minus 4 pixels
  • 1 output neuron (marked blue) – the 29th pixel
So all pixels in rows 1–3 are copied, and likewise all pixels in columns 1–3.

Input Pixels
The first pixel that can be predicted by the Neural Network filter is the pixel at position (4, 4).
This pixel can be calculated by the Neural Network using all 28 pixels above and to the left as inputs. You can see this pretty clearly in the illustration underneath.
Predicted Pixel
All the green pixels are the input pixels which are passed into the Neural Network; as outcome it predicts the blue pixel.
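Gathering those inputs can be sketched as follows. The exact column offsets of the 8×4-minus-4 window are my reading of the description above and should be treated as an assumption for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class InputWindow {
    // Gather the 28 input pixels for predicting the pixel at (x, y):
    // three full 8-pixel rows above it plus the 4 pixels to its left.
    static List<Integer> gatherInputs(int[][] image, int x, int y) {
        List<Integer> inputs = new ArrayList<>();
        for (int row = y - 3; row < y; row++) {        // 3 rows of 8 pixels = 24
            for (int col = x - 4; col < x + 4; col++) {
                inputs.add(image[row][col]);
            }
        }
        for (int col = x - 4; col < x; col++) {        // 4 pixels left of (x, y)
            inputs.add(image[y][col]);
        }
        return inputs;                                 // 24 + 4 = 28 inputs
    }

    public static void main(String[] args) {
        int[][] image = new int[16][16];               // dummy 16x16 grayscale image
        System.out.println(gatherInputs(image, 4, 4).size()); // prints 28
    }
}
```

This also makes clear why the 3-pixel border must be copied verbatim: for any pixel closer to the edge, the window would reach outside the image.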


In this section I wanna describe the components I created and used. Basically all code is written in Java.

As a first step we have to train the Neural Network. To make this step a little bit easier I have created a Pattern Exporter which is able to create a training/validation set for the JavaNNS tool. A more detailed explanation is given in the following illustration.

  1. Training Images:
    Images used for training the Neural Network
  2. Pattern Exporter:
    Written in Java, it cuts out 8×4 pixel areas and creates a .pat file for the JavaNNS tool. It produces a training and a validation set which will later be used in JavaNNS.
  3. JavaNNS:
    An open-source Java framework to create and visualize Neural Networks. The training and validation sets (.pat) can be loaded there and used for training/validation.
  4. compression.net:
    As soon as you have a nicely trained Neural Network you can save it into a neuralnetwork.net file, which I will later use in my Encoder/Decoder.
Training of the Neural Network

After finishing the training of the Neural Network we need to use the created neuralnetwork.net file in the Encoder/Decoder. A detailed explanation follows in the illustration provided underneath.

  1. Input Images:
    The sample image data which gets compressed by the PNG Neural Network filter
  2. PNG Encoder/Decoder:
    Encodes and decodes the image, internally using the Neural Network as predictor
  3. Neural Network:
    A Neural Network I developed in Java to produce predictions
  4. JNNSParser:
    Another Java class which parses an existing neuralnetwork.net file and creates a neural network out of it
  5. Output Images:
    As output it should give us compressed images which are smaller than the original images
Encode and Decode with the Neural Network Predictor

For encoding/decoding I'm using the pngj library. You can find it here:




There are many ways to configure the Neural Network:

Neural Network Design

Possible ways to set up the Neural Network

  • Amount of Input Neurons
  • Layout of the Input Neurons
  • Amount of Hidden Neurons
  • Amount of Hidden Layers
  • Activation Function
  • Learning Algorithm
  • and so on..
I will provide here some of my evaluated optimal values for the design of the Neural Network. Basically I evaluated them by testing with several image sets over many test rounds, calculating the bpp (bits per pixel) for each Neural Network setup and determining the best parameters. This brought me to the following result:
Evaluating the Neural Network Design
Evaluated Setup of the Neural Network
  • Amount of Input Neurons:
    28 Input Neurons
  • Layout of the Input Neurons
    8×4 Layout
  • Amount of Hidden Neurons
    – 3×3 = 9 Neurons
    – 5×5 = 25 Neurons
  • Amount of Hidden Layers
    1 Hidden Layer 
  • Activation Function
    Sigmoid activation function with clipped range, transforming values into the range of 0.2 to 0.8
  • Learning Algorithm
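The clipped-range setting above maps pixel values into [0.2, 0.8] so the network operates in the near-linear part of the sigmoid rather than its saturated tails. A sketch of that mapping (the exact scaling formula is an assumption; only the target range comes from the setup above):

```java
public class ClippedRange {
    // Map a 0-255 pixel value into [0.2, 0.8] for the network ...
    static double toNetRange(int pixel) {
        return 0.2 + (pixel / 255.0) * 0.6;
    }

    // ... and map a network output back to a 0-255 pixel value.
    static int fromNetRange(double v) {
        return (int) Math.round((v - 0.2) / 0.6 * 255.0);
    }

    public static void main(String[] args) {
        System.out.println(toNetRange(0));                 // lower bound of the range
        System.out.println(toNetRange(255));               // upper bound of the range
        System.out.println(fromNetRange(toNetRange(128))); // round-trips back to 128
    }
}
```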
Comparing with the other existing PNG filters:
As a next step I compared my Neural Network filter with the other existing PNG filters (none, up, sub, average and paeth). I did the test with several image sets (containing either artificial or natural images). Underneath you see a benchmark of the average bpp (bits per pixel) over the 26 images.
Benchmark with other PNG filters

You can see that my Neural Network filter compresses slightly worse than the Paeth and Average filters, but much better than the Up and Sub filters. After this benchmark I did another one with a bigger image set of 111 natural images. I wanted to know on which pictures my filter performs well and on which it performs badly.
Here are some pictures I compress better than all the other filters:
Picture the Neural Network is compressing better
I wasn't sure what these pictures have in common. Well, there are many flowers, so possibly my Neural Network really likes flowers. But I wasn't very comfortable with that explanation 🙂. So my other conclusion was that my Neural Network is especially good when the picture has the following characteristics:
  • A lot of textures
  • Different structures
  • Not a lot of noise
As a next step I looked through my holiday pictures to find one with the named characteristics and ran the test again. I came to the following result:
Waterfall (a picture the NN should compress well)
As a result I got the following bpp values from the 6 filters:

Filter: None => 7.289
Filter: Sub => 6.681
Filter: Up => 6.667
Filter: Average => 6.433
Filter: Paeth => 6.486
Filter: NN => 6.368

So my thesis about textured, structured pictures was somewhat confirmed. As you can see in the results, my Neural Network had the best compression rate.

Comparison of natural and artificial images

Another benchmark I wanted to do was using different training/validation images for training the network and checking the impact on compressing a set of natural or artificial images. In the following illustration you can see the results:

Comparison natural vs artificial


  • There is a lot of potential. I didn't have much time to find the perfect setup for the Neural Network. If you specialized on determining the perfect setup, you could get results where the Neural Network filter beats all other filters on the average benchmark.
  • Using a totally different structure for the Neural Network might also bring an improvement. I was thinking of recursive Neural Networks…
  • Another option would be using the Neural Network filter for a specific type of images and training the network only with this type of images.
  • Performance wasn't a point I worked on. It is clear that the other filters perform much, much faster than my solution.
The project is now on GitHub. Have a look:

TextMate Command – Duplicate current selected file

How to configure in TextMate Bundle Editor:

How to configure TextMate Duplicate plugin

Here is the code:

#!/usr/bin/env ruby -w

require "ftools"
require "#{ENV['TM_SUPPORT_PATH']}/lib/textmate"

selected_file = ENV['TM_SELECTED_FILE']

splitted_filename = selected_file.split(".")
extension = splitted_filename[splitted_filename.length - 1]

new_filename = selected_file.dup
pos = selected_file.length - (extension.length + 1)

new_filename.insert(pos, "_copied")

begin
  File.copy(selected_file, new_filename)
  puts "Successfully copied the file: '#{selected_file}'"
rescue
  puts "FAIL!"
end
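The core trick of the command is inserting a suffix just before the file extension. The same logic, sketched in Java:

```java
public class DuplicateName {
    // Insert a suffix before the file extension: "notes.txt" -> "notes_copied.txt".
    static String copiedName(String filename, String suffix) {
        int dot = filename.lastIndexOf('.');
        if (dot < 0) return filename + suffix;      // no extension: just append
        return filename.substring(0, dot) + suffix + filename.substring(dot);
    }

    public static void main(String[] args) {
        System.out.println(copiedName("notes.txt", "_copied")); // notes_copied.txt
    }
}
```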