
The Amazing Adventures of a Sexy Software Engineer

A journey of a thousand miles…

This week I spent some time adding a media gallery to the product I’m working on. You know, one of those things that mimic the native Photos user experience, where you can swipe left and right to navigate between items and zoom in and out of a photo… The usual suspects.

So, in order to move fast, we decided to rely on a third party library for it, instead of writing it from scratch.

We already had a well defined model object to represent a media item (photo or video). But the third party library that we chose requires its own model object, which makes total sense.

Also, it is possible we might end up switching to a different library in the near future, so it makes sense to have a clear boundary between our code and the third party library. I call it a boundary, some people call it a wrapper, and others would call it an abstraction layer, but the point is that we don’t want to scatter references to this third party library across our codebase. We want those references to be isolated, encapsulated in just one class.

To the point

First, let’s assume our model object, the one in our codebase, looks like this:
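
A sketch of what that might look like (the property names are assumptions based on the rest of the post, and both URLs are modelled as non-optional to keep the example short):

```swift
import Foundation

// Assumed shape of our own model object: a type flag plus an image URL and a video URL.
enum MediaType {
    case image
    case video
}

struct MediaItem {
    let type: MediaType
    let imageURL: URL
    let videoURL: URL
}
```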

The model object required by the third party library looks like this:
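
A sketch of it, based on how it is described later in the post: its initialiser takes a URL plus a flag marking the item as an image or a video (the exact names are assumptions):

```swift
// Assumed shape of the third party model object.
enum GalleryItemType {
    case image
    case video
}

struct GalleryItem {
    let url: URL
    let type: GalleryItemType
}
```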

We could discuss whether having a property to mark a MediaItem as an image or a video is an indication that MediaItem is the wrong abstraction, but that’s beyond the scope of this post (hint: I think it is the wrong abstraction, and that any kind of type property in any class or struct suggests we are trying to model two different things with just one type).

Let’s say that boundary is a class called MediaBrowser that, first of all, transforms our model objects into instances of the model objects required by the third party library:
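
Something along these lines (a sketch; MediaBrowser and galleryData() are named in the post, the stored items property is an assumption):

```swift
// The boundary class: the only place in our codebase that knows about GalleryItem.
final class MediaBrowser {
    let items: [MediaItem]

    init(items: [MediaItem]) {
        self.items = items
    }

    // The imperative mapping described in the next paragraph.
    func galleryData() -> [GalleryItem] {
        var galleryItems = [GalleryItem]()
        for item in items {
            if item.type == .video {
                galleryItems.append(GalleryItem(url: item.videoURL, type: .video))
            } else {
                galleryItems.append(GalleryItem(url: item.imageURL, type: .image))
            }
        }
        return galleryItems
    }
}
```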

Now, that code might look quite straightforward, but in plain English it reads like this:

For each instance of MediaItem, create an instance of GalleryItem, keeping in mind that, when creating an instance of GalleryItem, we first need to check whether the MediaItem is a photo or a video. If it is a video, we need to pass the value of MediaItem’s videoURL property to GalleryItem’s initialiser and mark it as a video. However, if the MediaItem is an image, we need to provide its imageURL to GalleryItem’s initialiser and mark the GalleryItem as an image.

It’s not rocket science. But describing what galleryData() does requires a paragraph. And every single time you read this function, you need to translate it into that paragraph in order to understand what it is doing.

That happens because the code is quite imperative. There are a lot of specific, unnecessary details in there. Unnecessary because we don’t need to be that specific to describe what we expect the galleryData() function to do.

This is what we expect: we want to map each instance of MediaItem to a brand new instance of GalleryItem. Simple as that. However, the code, again, is telling a different, more complex, story, with plenty of details that we don’t need to be made aware of every time we read it.

The declarative solution

One of the things I like most about Swift is that it provides multiple ways to model abstractions, and multiple tools that can be used to write more declarative code.

For example, we can declare initialisers in extensions. So, we could do something like this, to extend the GalleryItem object (remember, this object is part of a third party library, and therefore out of our control):
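
A sketch of that extension, using the types assumed above:

```swift
// An initialiser, declared by us, that knows how to build the third party
// GalleryItem from our own MediaItem.
extension GalleryItem {
    init(mediaItem: MediaItem) {
        switch mediaItem.type {
        case .video:
            self.init(url: mediaItem.videoURL, type: .video)
        case .image:
            self.init(url: mediaItem.imageURL, type: .image)
        }
    }
}
```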

And now, this would be our galleryData() function:
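
(Again a sketch, reusing the MediaBrowser from above.)

```swift
func galleryData() -> [GalleryItem] {
    // For each instance of MediaItem, create an instance of GalleryItem.
    return items.map { GalleryItem(mediaItem: $0) }
}
```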

This function now clearly expresses my expectation: I want to create a new GalleryItem for each MediaItem. Period. I don’t care about how that happens, I don’t care about the details, I just trust GalleryItem to do the right thing and create an instance of itself correctly.

We have turned the previous, imperative code into declarative code.

Now, in plain English the galleryData() function reads:

For each instance of MediaItem, create an instance of GalleryItem.

Rinse and repeat

In the scope of this post, there is not a huge difference in terms of lines of code, or in terms of code complexity, between the two solutions. This, per se, is not going to turn complex code into simple code, or buggy code into robust code.

But now, imagine applying the spirit of this approach, using the tools Swift provides to write declarative code, every single time there is an opportunity to do so. That would make a significant difference. Because a journey of a thousand miles begins with a single step.


On unit testing private behaviours

TL;DR: every time I find myself in a situation where I need to unit test a private behaviour, I consider that a clear indication that I might need to rethink my design.

Let me tell you a story

Some time ago, in what now seems like a galaxy far far away, I was introduced to unit testing by means of someone adding the following requirement to the project my team was developing: “code must be unit tested”.

As you can probably guess by the hint of sarcasm in the previous paragraph, unit testing, like almost everything else in life, is something you learn, not something you are born with. Which was painfully obvious at that time, as I have already written about.

But the thing is, a couple of days ago I finally had the opportunity, since I was trapped in a plane for 13 hours, to catch up on some reading. In particular, I was finally able to devour Sandi Metz’s new book.

The book is still unfinished, but I highly recommend grabbing a copy, and reading it cover to cover, as soon as possible.

There are plenty of potential quotes in that book, but I would like to focus on one in particular:

… the first step in learning the art of testing is to understand how to write tests that confirm what your code does without any knowledge of how your code does it.

Tests always tell at least two stories.

Unit tests always tell at least two stories. One, especially if the tests are well written, is a crystal clear description of the expectations you have about the behaviour of the code under test.

The second story, though, is usually one that nobody likes to hear. If your tests need to manipulate the internal behaviour of the code under test in order to test your external expectations of that code, then you need to rethink your design. Allow me to illustrate this with an example, one that links back to the introduction of this very post.

When testing goes wrong.

Back to my story: no one in our team had any experience with testing, so at the time we did not recognise the code smell, but here is what we were trying to do.

We had this one class that was supposed to be a one-to-one mapping to a RESTful API, what in the classic three-tier architecture would correspond to the data tier, service layer, transport layer, or whatever you want to call it. The point is, this was the class that performed networking operations against a RESTful API. Let’s call it MediaService.

Part of the expected behaviour of MediaService was that it should cache data, using a key-value cache. This data cache would expire after a given time.

So, the flow would be like this: with every request, we would first check whether there was cached data that had not expired. If there was, MediaService would return it; if there was no previously cached data, or the existing data was marked as expired, MediaService would hit the network, fetch the data, and add it to the cache, associating a timestamp with each key-value pair.

So, the next time we wanted to request data from the same endpoint, we would check the cache again, providing a timestamp. The cache would calculate whether the data had expired or not… rinse and repeat.

In code, that looked similar to the following snippet (those were still the Obj-C times, so this is a rough translation to Swift of my recollection of the events):
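
A sketch of that shape (MediaService, TimeCache, and the cache and network properties appear later in the post; the rest, including the expiry interval, is assumed):

```swift
import Foundation

// A key-value cache that associates a timestamp with each entry.
final class TimeCache {
    private var storage = [String: (value: Any, timestamp: Date)]()
    private let expiryInterval: TimeInterval = 5 * 60

    func object(forKey key: String) -> Any? {
        guard let entry = storage[key],
              Date().timeIntervalSince(entry.timestamp) < expiryInterval else {
            return nil // nothing cached, or the cached entry has expired
        }
        return entry.value
    }

    func set(_ value: Any, forKey key: String) {
        storage[key] = (value, Date())
    }
}

// A thin wrapper around URLSession.
final class NetworkClient {
    func fetchData(from url: URL, completion: @escaping (Data?) -> Void) {
        URLSession.shared.dataTask(with: url) { data, _, _ in
            completion(data)
        }.resume()
    }
}

final class MediaService {
    // The collaborators are created right here: MediaService is tightly
    // coupled to these concrete types.
    private let cache = TimeCache()
    private let network = NetworkClient()

    func fetchMedia(from url: URL, completion: @escaping (Data?) -> Void) {
        if let cached = cache.object(forKey: url.absoluteString) as? Data {
            completion(cached)
            return
        }
        network.fetchData(from: url) { [weak self] data in
            if let data = data {
                self?.cache.set(data, forKey: url.absoluteString)
            }
            completion(data)
        }
    }
}
```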

Well, not very idiomatic Swift code, of course, but again, those were the dark days before generics.

Now, if you wanted to test that the cache is ignored when cached objects have expired… how would you do that?

Dependencies

I don’t recall who said it, but someone said that good software engineering is all about arranging the code in a way that complexity does not collapse on you.

That usually implies breaking down your codebase, organising it into separate subsets or structures. Since you are already breaking down your code and organising it in smaller chunks, you might as well make each of those chunks contain code that is related, that deals with one concern. So, instead of one big ball of code, you would end up having smaller balls, each one of them taking care of one and only one smaller part of the big problem your software is trying to solve. (That’s how I visualise cohesion, by the way)

But of course, the problem your software tries to solve is bigger than each and every one of those small, cohesive structures that you have used to organise your code. So they need to collaborate with each other.

And here is where dependencies are born. One block of code needs to collaborate with another block of code. One part of your solution needs to collaborate with another part of your solution. Or, if you will, one part of your solution depends on another part of the solution.

The tricky part here is how to set up those dependencies in a way that does not defeat the whole purpose of breaking down your code into smaller structures. The key is setting up those collaborations in a way that keeps the collaborators independent from each other.

Depending on abstractions, not concretions

Yep, that’s kind of the definition of the Dependency Inversion Principle.

In the previous code sample, MediaService needs TimeCache. The key here is that it actually depends on the existence of TimeCache, because MediaService creates TimeCache; therefore the behaviour provided by TimeCache is, in a way, embedded within MediaService. Or, in other words, the MediaService abstraction depends on the implementation details of TimeCache.

But, why does that matter?

Firstly, MediaService is tightly coupled to TimeCache. So coupled to it, in fact, that it needs to create it in order to make itself functional.

Secondly, and due to the previous point, it is not possible to modify the behaviour of TimeCache, without modifying MediaService.

Thirdly, there is a more subtle issue here. The fact that there is a cache, and the fact that said cache expires, is not part of the public API of the MediaService class. It is an implementation detail, buried inside the MediaService class, but an implementation detail so significant that it leaks outside its container. We know MediaService handles a cache that marks items as expired, and we actually expect that behaviour; however, that expectation is not declared anywhere in MediaService’s public interface.

So now, when it comes to unit testing, we face a very interesting problem: we need to test a behaviour that is a private implementation detail, but since we need to test it, it is not private anymore; it is a well-known expectation, yet one we cannot test because it is a private implementation detail. You see the infinite loop here, right?

And notice how I have not even mentioned networking…

No worries, mocks, stubs and spies can help!

Well, not anymore. In the old Objective-C days, we could just declare a category in the testing bundle to expose and override any private property of a class with a mock (or stub, or spy, to be honest, I never really understood the difference).

In this case, we could override cache and network, provide mocks, set up our tests, and go on our merry way.

The problem with that is that we would still be testing private behaviour. And why should we not do that?

The reason is, again, subtle. When we test private behaviour, we are coupling our tests very tightly to the production code, because we are not testing public APIs anymore; we are testing internal implementation details we should not even be aware of.

Have you ever heard anyone complaining about how unit testing is a waste of time and effort because, whenever you change the production code, there is always a bunch of tests that need to be rewritten? I bet the reason is very tight coupling between the tests and the code under test. Or, in other words, the tests are not testing what they should (public expectations) but private details about how those expectations are fulfilled.

Also, in the age of Swift, doing that is not even possible anymore. If MediaService is final, we cannot subclass and override, and if cache and network are declared as constants with let, there is no way to override them.

So, what to do?

Invert the dependencies

This is not the first time I have blogged about dependency injection. Once even using the same example I am using today, and another time in a different context. And I am afraid it won’t be the last time I blog about it.

But that’s for a reason. Inverting the dependencies, in my experience, only has upsides. It makes code more decoupled, easier to test, and therefore easier to maintain in the long term.

In this case, the best solution would be something along these lines:
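
A sketch of that direction: caching and networking are modelled as protocols, and the concrete implementations are injected through the initialiser (the protocol names are assumptions):

```swift
protocol Caching {
    func object(forKey key: String) -> Any?
    func set(_ value: Any, forKey key: String)
}

protocol Networking {
    func fetchData(from url: URL, completion: @escaping (Data?) -> Void)
}

final class MediaService {
    private let cache: Caching
    private let network: Networking

    // The behaviours MediaService relies on are provided from the outside.
    init(cache: Caching, network: Networking) {
        self.cache = cache
        self.network = network
    }

    func fetchMedia(from url: URL, completion: @escaping (Data?) -> Void) {
        if let cached = cache.object(forKey: url.absoluteString) as? Data {
            completion(cached)
            return
        }
        network.fetchData(from: url) { [weak self] data in
            if let data = data {
                self?.cache.set(data, forKey: url.absoluteString)
            }
            completion(data)
        }
    }
}
```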

Cache and networking behaviours can now be provided through the MediaService initialiser. Notice the shift in the way we refer to caching and networking now: as behaviours. Because for MediaService, that’s what they are.

The details on how caching and networking work are not known to MediaService anymore. MediaService is provided the behaviours it needs to rely on, the behaviours it needs to collaborate with.
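
And this is what makes the original testing problem go away: from a test we can hand MediaService whatever caching and networking behaviours we need. A sketch of what such a test might look like (all names here are hypothetical):

```swift
import XCTest

final class StubCache: Caching {
    var stored = [String: Any]()
    func object(forKey key: String) -> Any? { return stored[key] }
    func set(_ value: Any, forKey key: String) { stored[key] = value }
}

final class StubNetwork: Networking {
    private(set) var requestedURLs = [URL]()
    func fetchData(from url: URL, completion: @escaping (Data?) -> Void) {
        requestedURLs.append(url)
        completion(nil)
    }
}

final class MediaServiceTests: XCTestCase {
    func testCachedDataIsReturnedWithoutHittingTheNetwork() {
        let url = URL(string: "https://example.com/media")!
        let cache = StubCache()
        cache.stored[url.absoluteString] = Data([0x01])
        let network = StubNetwork()
        let service = MediaService(cache: cache, network: network)

        var received: Data?
        service.fetchMedia(from: url) { received = $0 }

        XCTAssertEqual(received, Data([0x01]))
        XCTAssertTrue(network.requestedURLs.isEmpty)
    }
}
```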

And that’s the beauty of inverting the dependencies. By carefully setting up the way collaborators know about each other, by carefully hiding whatever is not necessary for collaborators to know, and by shifting from concretions to abstractions, we make each of our building blocks rely on the expectations that the other building blocks publish about their own behaviours.


The reason why I try to avoid storyboards

Sometimes you know that you like or dislike something, but you are not sure exactly why. Sometimes you find yourself in an unpleasant situation, but it is hard to pinpoint why you find it unpleasant.

Well, I just had an epiphany, and I think I finally understand why I don’t feel comfortable working with storyboards for anything bigger than a two-screen app.

Storyboards do not let me slice the problem.

Consciously, or most likely unconsciously, I tend to slice problems, and rearrange those slices in a way that suits me, or suits the context I am working on.

Each of those slices can then be sliced down even further, so those new slices can be rearranged again.

That helps me, first, make the problem solvable. Which, immediately, increases my productivity: when I am facing a problem that happens to be too big, or just seems to be too big, I tend to procrastinate a lot.

But by slicing problems as much as possible, trying to comply with the single responsibility principle, abstractions start popping up, screaming at me.

Because even though I would love to say that I can design a solution to a problem upfront, the truth is that I embraced, a long time ago, the fact that I am better at noticing when the problem I am trying to solve is suggesting a solution to me than at trying to make the problem fit my preconceived solution to it.

Also, I find it easier to make small slices work faster. It might be the time spent doing TDD, it might be that it’s just the way my brain works, but the shorter the feedback cycles, the faster I can move forward. It is like I can physically feel impediments removing themselves, getting out of my way. Taking a storyboard of a certain complexity to a stage where I can make it work takes so long!

So I guess we are back to a recurrent theme of this blog: the single responsibility principle.

A storyboard can be a time saver, a great solution to build a quick prototype or a product with a well defined, clean and simple navigation flow.

But, in my opinion, more often than not, it is just a big, muddy chunk of responsibilities that I’m better off breaking down.


This is how I parse JSON in Swift

There are great libraries to deal with JSON data in Swift. Just for the record, here are some of them, in no particular order:

Third party libraries are great. I am even guilty of releasing a couple of them myself, but the thing is that I believe pulling in a third party dependency should be carefully considered.

The advantages and disadvantages, the benefits and risks, of relying on third party libraries are definitely worth a separate post. But, for now, let me just say that I think relying on third party code is a commitment that needs to be very carefully considered. It is a dependency on code that is out of your control.

But back to the post at hand. Even though the existing third party JSON parsers and serializers are great, I tend to do all my JSON parsing manually.

Why? First of all, check the previous paragraph. I am not for reinventing the wheel at all, but so far, I haven’t found myself in any situation where relying on a third party library had any sensible advantage over rolling my own parsing.

Secondly, I have always liked the NSCoding approach.

An object should know how to serialize and materialize itself. Now, when it comes to simple model objects (kind of like a POJO), it could be argued that embedding that knowledge into an object is not the best possible idea. After all, the NSCoding approach implies that a model object would need to know the specifics of a format (JSON, XML, plist, or whatever the cool kids use at the moment), and that knowledge should not be part of the model object itself.

But Swift provides awesome tools to model this in a modular way. Like, for example, extensions. So we could still declare our model object as a simple entity, without any behaviour:
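
For example, something like this (the properties are placeholders; the post does not spell them out):

```swift
import Foundation

// A plain value type with no behaviour.
struct BlogPost {
    let title: String
    let body: String
    let publishedAt: Date?
}
```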

And then declare a failable initializer that creates an instance of BlogPost from JSON data in an extension:
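
A sketch of that extension, parsing a plain [String: Any] dictionary by hand (the JSON keys are hypothetical):

```swift
extension BlogPost {
    init?(json: [String: Any]) {
        // The required fields: bail out if any of them is missing or mistyped.
        guard
            let title = json["title"] as? String,
            let body = json["body"] as? String
        else {
            return nil
        }

        // An optional field.
        let publishedAt: Date?
        if let timestamp = json["published_at"] as? TimeInterval {
            publishedAt = Date(timeIntervalSince1970: timestamp)
        } else {
            publishedAt = nil
        }

        self.init(title: title, body: body, publishedAt: publishedAt)
    }
}
```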

The code looks clean enough to me, I am still complying with the single responsibility principle, and I still have a good separation of concerns.

If I needed to materialize a BlogPost from a different format, say a plist file, all I would need to do would be to add a new extension with a new failable initializer.

And that’s how I parse JSON nowadays.


TIL: UIViewController.init() might initialize a xib

It is all in the documentation, both in the UIViewController Class Reference, and in the comments in the class header. It also makes sense, but today I spent a couple of hours debugging a weird crash, due to this.

So this is something I think I won’t forget in a long time.

When initializing a view controller passing nil as the nib name, the view controller will attempt to load a nib whose name is the same as the view controller’s class. That’s what the UIViewController header clearly states:

If you invoke this method with a nil nib name, then this class’ -loadView method will attempt to load a NIB whose name is the same as your view controller’s class. If no such NIB in fact exists then you must either call -setView: before -view is invoked, or override the -loadView method to set up your views programatically.

But there is more. According to the UIViewController class reference:

If you use a nib file to store your view controller’s view, it is recommended that you specify that nib file explicitly when initializing your view controller. However, if you do not specify a nib name, and do not override the loadView method in your custom subclass, the view controller searches for a nib file using other means. Specifically, it looks for a nib file with an appropriate name (without the .nib extension) and loads that nib file whenever its view is requested. Specifically, it looks (in order) for a nib file with one of the following names:

If the view controller class name ends with the word ‘Controller’, as in MyViewController, it looks for a nib file whose name matches the class name without the word ‘Controller’, as in MyView.nib.

It looks for a nib file whose name matches the name of the view controller class. For example, if the class name is MyViewController, it looks for a MyViewController.nib file.

And that’s exactly what happened to me earlier today. I was initializing a view controller like this:
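
Something along these lines (simplified; the class body here is just a stand-in for the real one):

```swift
import UIKit

final class AddContactViewController: UIViewController {}

// A plain init(), which ends up going through init(nibName: nil, bundle: nil),
// and therefore through the nib lookup described above.
let viewController = AddContactViewController()
```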

While also having in the project a completely unrelated xib named AddContactView. Unrelated, as in “AddContactViewController does not use AddContactView.xib at all. Nothing. Zero.”

So my app was crashing, throwing an assertion that kindly reminded me that AddContactView did not have its owner properly set up.

Well, there is a reason for that.

Remember, read your docs! 💁


Swift extensions can be applied to Objective-C types

The more I think about this, the more obvious it sounds. And the more I think about it, the less I believe it deserves a post. But it was a pleasant surprise anyway, so here it goes.

Let’s assume we have an Objective-C project, that contains a class like this:

And it also contains another class like this:

Now, let’s say we declare a Swift protocol like this:
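
A minimal sketch of such a protocol (the displayText requirement is a hypothetical name, chosen only for illustration):

```swift
protocol Displayable {
    var displayText: String { get }
}
```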

And extensions to the Person and Movie objects to make them comply with the Displayable protocol:
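
A sketch of those extensions. Person and Movie are written here in Swift as NSObject subclasses so the example is self-contained, standing in for the Objective-C classes of the sample project; their properties are assumptions:

```swift
import Foundation

final class Person: NSObject {
    let name: String
    init(name: String) {
        self.name = name
        super.init()
    }
}

final class Movie: NSObject {
    let title: String
    init(title: String) {
        self.title = title
        super.init()
    }
}

// Retroactively conforming the existing types to the Swift protocol.
extension Person: Displayable {
    var displayText: String { return name }
}

extension Movie: Displayable {
    var displayText: String { return title }
}
```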

Now, we can consume the Displayable protocol from Swift:
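
For instance (a sketch):

```swift
// Both types can now be treated uniformly through the protocol.
let items: [Displayable] = [Person(name: "Ada Lovelace"), Movie(title: "Metropolis")]
for item in items {
    print(item.displayText)
}
```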

Or from Objective-C (please, excuse the weakness of my ObjC-generics kung fu):

Which, believe me, is a huge relief when dealing with large legacy projects.

In case you want to play with a complete example, I uploaded a sample project to GitHub.


Testing

I have been thinking about how to approach this post for almost a week, and in the end, I decided that I am going to do it with honesty. The following are my opinions, and I take full ownership of everything I am going to write. So, fair warning, long rant incoming.

I am sick and tired of hearing bullshit related to unit testing. I have a really hard time figuring out how any developer can be fine with delivering untested code, code that nobody knows whether it really works until it is tested in production, with real users. I am sick and tired of always hearing the same arguments, the same excuses, justifying not writing unit tests. And I hate how the idea that unit testing is not indispensable has made its way into some management circles.

These are some of the excuses I have heard to not write unit tests.

We don’t have time to write unit tests.

Well, when you say that, what I hear is “I don’t know how to write unit tests”.

Why? Because of this:

Unit testing is a software development process in which the smallest testable parts of an application, called units, are individually and independently scrutinized for proper operation.

That’s the first result when googling “unit tests”. And I think it is a very good definition.

A unit test should test a small unit of behaviour. If you expect a class to return a specific value from a specific method when some particular conditions are met, all you have to do is test that you get the value you expect from the method you are interested in, after recreating your particular preconditions. That is not that hard. That is not that time consuming.

And if it is, if testing the smallest unit of behaviour in your system is hard and time consuming, then your smallest unit of behaviour is not as small as you think. Back to my argument: you would be testing it wrong.

If I change the production code I will have to change my tests.

Well, obviously, if you know that the behaviour and the public API of a module are going to change tomorrow, first thing in the morning, it might not make too much sense to unit test that module today.

But… how many times will you find yourself in that situation? Not many.

Of course, a codebase is similar to a living and breathing organism that evolves and might change its behaviour over time. Well, since you will have to refactor your production code, you will also have to refactor your testing code to reflect the new behaviour of the system.

But justifying not writing tests because you will refactor your code in the future does not make any sense. Will you also refuse to write code today because you might need to refactor that code in the future? I don’t think so.

Testing is difficult and requires too much work.

Well, sorry, but if testing your production code is very difficult, your production code is not properly designed.

You might be trying to test classes with too many responsibilities (violating the Single Responsibility Principle), you might be testing private behaviour, or you might be testing multiple layers at once (therefore trying to write something that is not actually a unit test). Either way, when testing gets hard, that’s a clear sign that the design of the production code is wrong.

Or, to put it another way, well factored code, with small, highly cohesive and loosely coupled units, is easy to unit test.

Testing private behaviour is very hard.

This point is not too different from the previous one.

If a behaviour is private, it should not be tested. If you really, really want to test it, it means it is not private. So, we are back to square one: the code is badly designed.

Now, allow me to digress a little bit and tell you a story. A couple of years ago, my team spent an insane amount of time trying to test a class that performed some network requests and then cached the results. The cache was supposed to expire after five minutes, if I recall correctly. While the cache was valid, the class performing the networking would return cached values; when the cache was invalid, it would trigger a new network request. The production code looked like this:
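
A sketch of that shape (the names are assumptions, and the expiring cache is boiled down to what the story needs):

```swift
import Foundation

final class ExpiringCache {
    private var storage = [String: (value: Data, timestamp: Date)]()
    private let lifetime: TimeInterval = 5 * 60 // expire after five minutes

    func value(forKey key: String) -> Data? {
        guard let entry = storage[key],
              Date().timeIntervalSince(entry.timestamp) < lifetime else { return nil }
        return entry.value
    }

    func set(_ value: Data, forKey key: String) {
        storage[key] = (value, Date())
    }
}

final class FeedService {
    // The cache is created internally, so a test cannot replace it.
    private let cache = ExpiringCache()

    func fetchFeed(from url: URL, completion: @escaping (Data?) -> Void) {
        if let cached = cache.value(forKey: url.absoluteString) {
            completion(cached) // cache still valid: return the cached value
            return
        }
        URLSession.shared.dataTask(with: url) { [weak self] data, _, _ in
            if let data = data {
                self?.cache.set(data, forKey: url.absoluteString)
            }
            completion(data)
        }.resume()
    }
}
```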

The team complained a lot about how difficult it was to test that behaviour, how difficult it was to mock the cache, and spent an insane amount of time researching mocking frameworks.

All we needed was to turn to the Dependency Inversion Principle and inject the dependency, like this:
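
For example (the same sketch as above, with the cache now typed as a protocol and provided through the initialiser; the names are assumptions):

```swift
protocol FeedCache {
    func value(forKey key: String) -> Data?
    func set(_ value: Data, forKey key: String)
}

extension ExpiringCache: FeedCache {}

final class FeedService {
    private let cache: FeedCache

    // The caching behaviour is injected; a test can pass in a stub.
    init(cache: FeedCache) {
        self.cache = cache
    }

    func fetchFeed(from url: URL, completion: @escaping (Data?) -> Void) {
        if let cached = cache.value(forKey: url.absoluteString) {
            completion(cached)
            return
        }
        URLSession.shared.dataTask(with: url) { [weak self] data, _, _ in
            if let data = data {
                self?.cache.set(data, forKey: url.absoluteString)
            }
            completion(data)
        }.resume()
    }
}
```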

And now the behaviour of the cache could be tested in isolation. And the behaviour of the class doing the networking could be tested in isolation. And the collaboration between the two of them could be tested in isolation as well.

Testing the UI is impossible. How are you going to unit test a view controller?

What do you mean by “unit testing a view controller”? Or “unit testing an activity”? Because what I have found people usually mean by that is something along the lines of “how am I going to test that when I tap a button the text in a text field is updated with new content downloaded from a server”?

The UI layer should be thin. Very thin. It should only render data, and propagate down user interaction. Period. If it does something else, the design is wrong. Period.

Some people call it the Humble Object pattern, but I don’t think we really need a name for it. The UI layer should not contain any “business” logic. If the UI layer needs to coordinate state between UI elements, that logic should be extracted into different units, because of modularity, encapsulation, and good separation of concerns, but also because, as an added benefit, it will be easier to test.

By now, I am sure you see the pattern: well designed code is easy to test. But the opposite is even more accurate: poorly designed code is difficult to test. And the more difficult it is to test a particular unit of code, the poorer its design.

I can’t waste time! I need to ship!

Do you want to ship code that works? Or do you just want to roll the dice?

Did this post offend you? If that’s the case, to be honest, I don’t care.


What does a senior developer do?

This is a post I have been meaning to publish for months, but I am always hesitant to do so for multiple reasons.

Firstly, because giving advice to others about how they should do their job is not the most humble thing in the world, and secondly because I have learnt most of what I am going to discuss in this post from my own mistakes.

So let me start with an apology to all those developers who had to deal with me doing the exact opposite of what I am going to suggest now. You know who you are. Sorry, guys.

What is the difference between a senior and a junior software engineer?

Let’s begin by setting some common ground. What defines a developer as senior? This is my premise:

  • A senior developer has confidence.
  • A senior developer sees the big picture.

I have been lucky to work in three different countries, with three extremely different work cultures and ethics. But there is a trend I have seen everywhere: a significant misconception of what defines a developer as senior.

Obviously, what makes a developer senior is not the number of years in the craft. There is a difference between having five years of experience and having the same year of experience five times. And it hurts deeply when I see hiring managers who refuse to see it that way.

What makes a developer senior is not the amount of encyclopaedic knowledge a developer might have about a specific language, API or framework either.

The difference between a senior and a junior developer is confidence. A senior developer makes faster and better decisions, not only because they have more experience, but because a senior developer is capable of seeing the big picture.

Now, it can be argued, with reason, that confidence can be a liability more than an asset. And I tend to agree with that as well.

So allow me to clarify: confidence is not refusing to be open to other people’s opinions, it is not thinking that you already have all the answers, it is not thinking that you are the smartest person in the room.

A confident developer will be open to fresh ideas that challenge his current knowledge, when not actively pursuing them, and confident developers will always approach a technical discussion trying to find better solutions than their own to the problem at hand.

Confidence is also what makes senior developers want to share their knowledge and experience with others.

How to behave as a good senior developer

A senior developer must be humble and respectful, honest and transparent, and willing to challenge others and himself.

Humble and respectful

A senior developer would assume that his knowledge of certain areas of his daily practice is outdated. And senior developers can be sure that there will always be another developer in the team that knows more than them about one or more of those areas.

And you know what? That is great. Remember, this software thingy is not a pissing contest; this is about building great products, with code of the highest possible quality. A senior developer, and a team/technical lead in particular, should be able to recognise when others know more, and encourage them to use that knowledge for the greater good (the final product).

Honest and transparent

We all learn by making mistakes. A senior developer has, most likely, reached that point in her/his career due to all the mistakes he/she has made in the past.

Also, due to the aforementioned confidence, a senior developer won’t be scared of making more mistakes, because the senior developer knows that what matters is not writing perfect code, but recovering from the inevitable mistakes fast and with elegance.

By being honest and transparent, by always thinking out loud in the open, by discussing his approach with the rest of the team, and by showing her/his thought process in the most honest and open way possible, the senior developer will help others understand that what matters is having an actual thought process, and taking the time to consider the trade-offs before starting to write code like crazy.

I believe there is nothing more valuable for a junior developer than seeing someone, with way more experience, struggle with the same issues that he/she struggles with.

A senior developer always attacks problems with an analytic approach, and a senior developer always has in mind those things that junior developers are usually not aware of yet: business requirements, trade-offs, and the big picture. And makes the importance of those constraints crystal clear.

Because a senior developer always sees the big picture. The senior developer always has a long term plan. The senior developer is relentlessly moving towards the goal posts, no matter what it takes.

Constructing that big picture, that master plan, and setting the placement of the goal posts openly, in front of everyone, rationalising and explaining every single decision is the best that the senior developer can do to mentor others.

Because mentoring is not lecturing, mentoring is not only suggesting using an obscure class in the system frameworks, mentoring is showing, with your own actions, day after day, that a master plan, and a thought process to fulfil it, are necessary.

Willing to challenge others and himself

A senior developer cares deeply about what he does, and cares deeply about the end result of his practice. That means a senior developer cares about excellence.

A senior developer has to be a living and breathing example of how excellence does not happen by accident. Because excellence requires work, dedication, commitment, and the will and drive to actually achieve it.

A senior developer will always challenge himself. A senior developer is never satisfied with the current state of the code, and is always willing to improve it in some way. And he is open, even vocal, about it.

A senior developer will also challenge junior developers by asking them challenging questions. There is nothing worse than telling a junior developer, straight away, that something he implemented is not good enough, or plain wrong.

However, there is nothing better than carefully reading a junior developer’s code and asking the kind of questions that will make that developer understand that the code can still be improved.

A senior developer will not provide direct solutions, but gently steer the junior developer towards those spots in the implementation that do not align with the big picture, the master plan. Because, remember, the senior developer always has the master plan in mind.

A senior developer will always end a code review with some open-ended question or suggestion to the junior developer. A simple “do you think the functionality of this particular part of the system would be easy to extend?” works wonders. A “do you think there is a way to make this code simpler?”, accompanied by a “by the way, have you heard about the builder pattern?”, will provide a great learning opportunity to another developer.

And a senior developer will carefully consider when to let others make mistakes. Because mistakes, and the lessons learnt from them, are what made the senior developer, well, senior.

Note:
Edited on May 6th to (try to) remove some gender-specific pronouns, as pointed out by @fbartho. My apologies if anyone felt this article was gender-specific. I am not a native English speaker, and the best I can do as a senior developer is try to recover from my mistakes the best I can. If you still feel this post is gender-specific, please let me know.


Polymorphism and protocol extensions

Allow me to start by using a big word: cyclomatic complexity.

Cyclo-what? Illustrate that with an example!

Let me illustrate a typical example of cyclomatic complexity. Imagine you have some entities similar to these:
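
Something like this (a sketch; the property names are assumptions, while the types of their descriptions come from the paragraph below):

```swift
import UIKit

struct Image {
    let image: UIImage
}

struct Movie {
    let url: NSURL
}

struct TextMessage {
    let text: String
}
```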

Now, imagine we need to render a table view containing Images, Movies, and TextMessages. Notice how each one of those entities provides a description of itself (the data that will need to be rendered) of a completely different type. In plain English, the table cells will need to display a UIImage if the object to render is of type Image, an NSURL if the object to render is of type Movie, and a String if the object to render is of type TextMessage.

So, the obvious implementation, the one that first comes to mind, the one I have seen in a thousand different projects in the good old Objective-C days (yes, that was sarcasm), would be declaring a method typing the parameter as id, and then casting and checking the type of whatever is passed. Translated to Swift, it would look like this:
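
A sketch of that implementation:

```swift
class Cell {
    // One method, typed as Any, that checks and casts whatever is passed in.
    func render(item: Any) {
        if let image = item as? Image {
            // Configure an image view with image.image.
            print("Rendering image: \(image.image)")
        } else if let movie = item as? Movie {
            // Configure a thumbnail or player for movie.url.
            print("Rendering movie at: \(movie.url)")
        } else if let message = item as? TextMessage {
            // Configure a label with message.text.
            print("Rendering text: \(message.text)")
        }
    }
}
```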

Well, that was bad in the good old Objective-C days (sarcasm again), but now, in the bright Swift present, code like that is just unacceptable.

Wait, what? Why is that bad?

Here is where the cyclo-thingy kicks in.

That method looks very simple, but in reality it is very complex. Or, to be more accurate, it has the potential to become very complex, very fast.

First, as soon as we start to create different logical branches, we are starting to add different behaviours to something that, in fact, is just one single behaviour (in this case, rendering data in a cell). Since we are not dealing with a single behaviour anymore, it is very likely that the behaviours in each one of those logical branches will start diverging more and more as time goes by.

But that is not the only reason. As soon as we start creating different logical branches, and each branch starts behaving differently, the code becomes more difficult to understand, which makes it more difficult to maintain.

Also, when the code is more difficult to understand, especially because it does multiple, very different things, it is easier to make mistakes. Never underestimate the importance of this. This, in my experience, is the first source of bugs in every single codebase I have ever touched.

It took me years to figure it out, but once I got it, I decided to make this my only non-negotiable rule when it comes to writing code:

The Single Responsibility Principle applies to every single task in the daily practice of a software engineer, from organizing code in files, to structs, classes, methods, functions, or any other construct.

Again, let me be clear about this: to me, this rule is non-negotiable. Software engineering is all about trade-offs, except when it comes to the Single Responsibility Principle.

One file should contain only one abstraction (a struct, a class, an enumeration, a protocol, you name it), one method should only do one thing (either render a movie, or render an image, but never, ever, render a movie or an image), one class should do just one thing, one module should deal with only one high-level concern, and so on.

Now that we have got that out of the way, let’s look again at the implementation of the Cell class, because there is another violation of the SOLID principles: the Cell class is not open for extension and closed for modification. If we want to render another type (say, Audio), and since Cell is already rendering three different types it wouldn’t be that surprising if we had to add yet another one, we would need to edit the source code of the Cell class. And we’d need to edit it to add yet more logical branches, more complexity.

In other words, we would need to edit it and make it even more brittle. Not good.

So, where do protocol extensions fit into all of this?

Well, protocol extensions make composing behaviours very easy.

Now, it could be argued that the solution I am going to discuss is not the only possible solution. And that would be true. And we have already discussed an alternative solution to a problem very similar to this, by relying on plain-old polymorphism. So, one thing we could do here would be having a bunch of Cells, all of them implementing a protocol, or all of them subclasses of a base Cell class. However, I believe there is a neater way to implement this.

So, let’s start again with the same entities:
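
(The same sketch as before.)

```swift
struct Image {
    let image: UIImage
}

struct Movie {
    let url: NSURL
}

struct TextMessage {
    let text: String
}
```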

Now, let’s declare a protocol, and declare the render method in it:
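
A sketch of it (the associated type is an assumption of this sketch, so that each conforming type, and each protocol extension, can decide what it renders):

```swift
protocol Populable {
    associatedtype ItemType
    func render(item: ItemType)
}
```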

The next step would be declaring a Cell class:
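
At this point, just this:

```swift
class Cell {
    // Intentionally empty: behaviour will be composed through extensions.
}
```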

Notice how this class is actually empty; it does not provide any behaviour at the moment. Now, let’s start composing behaviours in extensions and protocol extensions. The first extension would make the Cell class implement the Populable protocol, and provide a “default behaviour” for it:
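
A sketch of that first extension:

```swift
extension Cell: Populable {
    // The "default behaviour": typed as Cell, because a cell should never
    // have to render itself, so this is not expected to be called.
    func render(item: Cell) { }
}
```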

To be honest, having to provide a “default behaviour” is the only part of this solution I don’t like. As you can see, I typed the parameter as Cell, assuming a cell won’t have to render itself; that way, I am providing a default behaviour that I do not expect to be necessary. And, again, that makes me feel uncomfortable.

Anyway, now I can start providing the rendering behaviour that I expect from the Cell, in separate extensions, one for each type of data I want to render:
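
A sketch of those extensions (in a real cell these would configure subviews; the print statements are stand-ins):

```swift
extension Populable {
    func render(item: Image) {
        print("Rendering image: \(item.image)")
    }
}

extension Populable {
    func render(item: Movie) {
        print("Rendering movie at: \(item.url)")
    }
}

extension Populable {
    func render(item: TextMessage) {
        print("Rendering text: \(item.text)")
    }
}
```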

Notice how Swift’s type inference keeps the final code clean of any extra type information: it just works:
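
For example (a sketch; the URL is just a placeholder):

```swift
let cell = Cell()
// The compiler picks the right overload based on the argument type.
cell.render(item: Image(image: UIImage()))
cell.render(item: Movie(url: NSURL(string: "https://example.com/trailer.mp4")!))
cell.render(item: TextMessage(text: "Hello!"))
```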

Now, here is the actual beauty of this solution. If I want to render a new type, all I have to do is add an extension to the Populable protocol with the logic necessary to deal with that specific type:
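
For example, to support the Audio type mentioned earlier (a sketch; its property is an assumption):

```swift
struct Audio {
    let streamURL: NSURL
}

extension Populable {
    func render(item: Audio) {
        print("Rendering audio at: \(item.streamURL)")
    }
}
```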

And, as an added bonus, Xcode, with the help of the Swift compiler, will provide me with extra documentation, in the form of code hints, about what types can be rendered by an instance of Cell:

(Screenshot: Xcode code completion showing the types supported by Cell’s render method.)

Recap!

In object-oriented programming, polymorphism (from the Greek meaning “having multiple forms”) is the characteristic of being able to assign a different meaning or usage to something in different contexts – specifically, to allow an entity such as a variable, a function, or an object to have more than one form.

By relying on strict typing, the single responsibility principle, favouring composition over inheritance, and using protocol extensions to our advantage, we can implement very robust solutions to complex problems, with code compliant with the Single Responsibility Principle, and therefore simple to read, understand, and maintain. And, as an added benefit, easy to test.
