My second week of UROP: Swift, JSON and more!

Last week I managed to port most of the code for my iOS application to Android (phone version only). Now that this is done, I've started working on the next stage: reading a JSON (JavaScript Object Notation) file describing a system and then using that to generate a model.

To maximise my learning of new languages, I decided to implement these new features in the iOS app using Apple's new language, Swift. Unfortunately, this means the update can't be released until iOS 8 ships, and it will only work for users who upgrade to iOS 8.

So, what did I think of Swift?

Swift was easy to pick up. It bills itself as a modern programming language, and it certainly fits that description. However, it is also clearly still a language in development. To work properly with Objective-C, a Swift class currently has to subclass NSObject, something the documentation suggests is not required! (I have submitted a bug report about this.)
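
For instance, this is roughly what the workaround looks like (the class name here is just illustrative, not from the app):

import Foundation

// Subclassing NSObject so the class can be used from Objective-C code.
class BlockModel: NSObject {
    let gain = 11.0
}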

One of the modern features of Swift that I've already fallen in love with is its support for tuples. A tuple can be stored in a variable, returned from a function, or used anywhere else you might need it. Very useful.
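
As a quick illustration (not code from the app), here is a function returning two values at once as a tuple, which can then be unpacked into separate constants:

// Returns the smallest and largest values in a (non-empty) array.
func minMax(values: [Double]) -> (min: Double, max: Double) {
    var lo = values[0]
    var hi = values[0]
    for v in values {
        if v < lo { lo = v }
        if v > hi { hi = v }
    }
    return (lo, hi)
}

let (lo, hi) = minMax([3.0, -0.2, 11.0])
println("min \(lo), max \(hi)")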

JSON and Swift / Java

Both Swift and Java provide a built-in class to parse JSON from a string with just a single line of code. In this update to the Feedback app, a JSON string is used to describe the system to be modelled by the app. The semantics of this file will be discussed in another blog post, but an example can be seen below.
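
On the Swift side, that single line is a call to NSJSONSerialization (on Android, it's org.json's JSONObject). Roughly, using a tiny stand-in string rather than the full model JSON shown further down:

import Foundation

let modelString = "{\"name\": \"basic\", \"hasDisturbance\": true}"
var error: NSError?
if let data = modelString.dataUsingEncoding(NSUTF8StringEncoding) {
    // The one-line parse; everything else is unwrapping the result.
    if let model = NSJSONSerialization.JSONObjectWithData(data, options: nil, error: &error) as? [String: AnyObject] {
        if let name = model["name"] as? String {
            println("Loaded model named \(name)")
        }
    }
}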

Creating the User Interface (UI)

A major feature of this version of the app is that the UI is generated from the JSON model. On iOS this is a simple case of generating the UI and positioning items based on the screen size. Exhaustive testing is then possible: if the generated UI looks right on both the 3.5" and 4" iPhone screen configurations, I know it will work on all currently available iPhone devices.
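
The positioning itself boils down to arithmetic on the screen bounds. A rough sketch of the idea (the sizes and the centring rule here are illustrative, not the app's actual layout code):

import UIKit

// Centre a 60x40 block horizontally, based on the current screen width.
let screenWidth = UIScreen.mainScreen().bounds.size.width
let blockSize = CGSize(width: 60, height: 40)
let blockView = UIView(frame: CGRect(
    x: (screenWidth - blockSize.width) / 2,
    y: 100,
    width: blockSize.width,
    height: blockSize.height))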

As an example, here is the JSON for the basic model that is in the currently released iOS app:

{
    "name": "basic",
    "model":
    [
        {
                "controller": {
                "block": 11
            }
        },
        {
            "device": {
                "block": 9
            }
        },
        [
            {
                "sensor": {
                    "block": -0.2
                }
            }
        ]
    ],
    "hasDisturbance": true,
    "description": "This is a model of a feedback system. By moving the sliders you can adjust the input I and the disturbance D. By tapping the block paramaters they also can be adjusted. The type of feedback system being modelled is shown below.\n\nThis app is designed to allows you to see how feedback can be used to modify input"
}

Currently, this produces the following UI:

Deconstructed iOS Feedback App

As you can see, the description of the model is not yet in place, nor are the labels for the input, output and disturbance sliders. However, the blocks, summing junctions (circles), lines and disturbance slider are all being generated and positioned based on the model described in the JSON file. There are still a few tweaks to make; for example, the disturbance slider is not aligned correctly over its summing junction, and this misalignment varies between the iPhone 4s screen size and the iPhone 5/5s sizes.

When it comes to Android, you might expect this to be rather different. Android devices come in all shapes and sizes, so exhaustive testing of screen configurations is not possible. However, after reading a blog post about Android screen fragmentation, I came to the conclusion that developing the UI for Android would be just as easy. As my application runs in landscape orientation, the width I'm working with is 640dp. By using this standardised width I can develop the application in the same way and then test it on a variety of screen sizes in the emulator to verify that the layout is generated correctly.

So what's next?

Next week I shall be finishing up the auto-generation of the UI. After this, I will work on supporting multiple types of input and output, for example a sine wave or a square wave.
