Archive for the ‘Tech’ Category

Why does Mac VoiceOver keep saying the word “simul”?

Saturday, January 12th, 2019 | Tech

I’m currently working with a client to improve the accessibility of their website for visually impaired users. This has involved a lot of time working with screen readers. As part of that, I have found a rather weird bug with Mac’s VoiceOver. It keeps saying the word “simul”.

Which isn’t a word. Maybe it’s saying simmul or simmel, or something else. None of these are words.

It happens when we give it a range to read. Something like “4-6”. The screen reader says the first number, then goes suspiciously quiet and says simul, and then starts building back up to regular volume as it gets to the final number.

I even asked about it on Stack Overflow, and everyone else was confused, too.

I wondered whether it might be a language issue. So, I tried adding a custom pronunciation, and double-checked the HTML tag had a lang attribute set to en-gb. Alas, no luck.
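For context, the setup looked something like this. This is a simplified, hypothetical sketch rather than the client's real markup, but the structure is the same:

```html
<!-- Hypothetical sketch: a page with the lang attribute set, containing a
     range. VoiceOver (Daniel Compact) reads the first number, says "simul",
     then reads the second. -->
<html lang="en-gb">
  <body>
    <p>Open Monday to Friday, 4-6pm.</p>
  </body>
</html>
```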

This is only a problem on Mac: TalkBack on Android works fine, for example.

In the end, I was able to get it to read correctly by changing the voice. By default, macOS comes with Daniel Compact set as the voice. However, when I switched to Daniel, Kate, or Kate Compact, it read it out correctly.

In a way, this is frustrating, because there is not much we can do to fix it. It's a bug with the voice in macOS. But it is at least somewhat comforting to know that I wasn't making some obviously silly mistake.

Scaling Scrum to a 30 person team

Friday, January 11th, 2019 | Business & Marketing, Tech

What do you do if you need to scale your Scrum team? Ideally, have multiple teams and use one of the many fine methods for scaling with multiple teams. But what if you want to scale a single team? Say, to 30 people?

This was the situation I ran into with a recent client. They had an important project and lots of money to throw at it, and they wanted it all to be one team.

You might think “but there is no way that could possibly work”. And you would be correct. It didn’t work that well. But, having no other option, we did find some hacks that made it easier. I’ll present these below.

Kim’s Corners

Doing a stand-up with 30 people is tough. You might think it took ages. But it didn't. We got done in 15 minutes. There were so many of us (in a special meeting room we had to book every day) that people kept it short and sweet. From that point of view, it was a good learning experience.

But it wasn’t useful. There was so much stuff going on that nobody could remember what everyone else had said. Most people did not even try. They just tuned out for most of it.

So, we moved to Kim’s Corners. Each workstream had a corner and we went around one corner at a time. The people in that corner listened to each other intently, while only taking a high-level overview of what the other corners said.

Goldfish Bowl

Having a retro was also challenging because there were so many people wanting to weigh in. To solve this, we used the Goldfish Bowl technique.

This involves having five chairs in the middle of the room. Four people sit on them, with one empty chair. Everyone else sits around in a big circle. Only the people in the inner chairs are allowed to talk on the topic at hand, and the discussions are time-boxed to five minutes. The group can vote to allow another five minutes if required.

What if you are sat on the outside? You go into the circle and claim the empty chair. At which point, someone from the inner circle is obliged to get up and go back to the outer circle, freeing up a chair to be the new empty chair. Anyone who has a strong opinion can take a chair, but without too many people talking at once.

Refinement Lucky Dip

Thirty people are far too many to sit around looking at a Jira board and pointing up stories. So, we used a lucky dip system in which five people were randomly selected to attend backlog refinement sessions.

Anyone else who particularly wanted to be involved, perhaps because they had specific knowledge of, or interest in, an upcoming piece of work, was also welcome to attend. But they were not required or expected to attend otherwise.

No-Release methodology

Wednesday, November 7th, 2018 | Tech

You’ve no doubt heard of us at Glorry, the exciting Silicon Valley startup that is taking the world by storm. We’re best known for raising £17 billion in funding on Kickstarter in less than 38 minutes, despite having no discernible business model. Still, that’s what they said about Instagram, and look who is laughing now. Mark Zuckerberg, that’s who.

We’re pushing the limits of Agile delivery to see how we can deliver the most value to our customers. But our Service Delivery team are also looking to ensure a consistent and stable customer experience that doesn’t allow new features to compromise on quality.

The result is a methodology we’ve called “No-Release” and I’m excited to share some details of it with you today.

What is No-Release?

Simply put, we don’t release any code.

What are the results like?

They’ve been outstanding. Since we adopted this approach, we’ve had zero bugs introduced to the live system. That’s not a misprint: zero bugs. Not one single incident has been raised related to the new code we’ve been writing.

Since adopting this approach, our velocity has increased. Developers feel more confident that their work will not cause issues in live. Product Owners are happy to prioritise tech debt because they know it won’t delay new features arriving in live. Service Delivery is less jittery about degradation due to changes to the product.

How does it work?

We based our workflow on a traditional Scrum methodology. We operate in two-week sprints with a backlog of features prioritised by the Product Owner. Each ticket begins with a Business Analyst sitting down with a Developer and a Tester to work out how we can deliver and test the acceptance criteria of the ticket.

When a ticket is complete and signed off, including going through our continuous integration pipeline where a series of automated tests are run, we then merge the ticket into our develop branch. At this point, the ticket reaches our Definition of Done and we can close it.

Our master branch contains a copy of the code deployed to live, while our develop branch contains all of the new features. Because we operate under No-Release, we almost never have to deal with merge conflicts because we never merge develop into master. Or anything else for that matter.

What are the drawbacks?

One of the biggest drawbacks to No-Release is that you do not release any code. This means that no new features or improvements ever make it to the end user.

Making this work requires buy-in across the organisation. Without everyone being on board, you can easily get developers saying “this is pointless, what am I doing here” every stand-up, and upper management suggesting they can fire the entire team and get the same results for much less money. Therefore, it’s important to get everyone to embrace the methodology before starting.

Each organisation needs to make its own decision as to whether this drawback is acceptable to gain the benefits discussed above.

Conclusion

No-Release methodology allows you to increase your development velocity while eliminating any risk of service disruption to the end user.

How I optimised Leeds Anxiety Clinic

Monday, October 29th, 2018 | Tech

We’re taking the lean startup approach with Leeds Anxiety Clinic and trying not to build anything unless we absolutely need it. As a result, when I originally built our website it was functional but not particularly fast.

Now that we’re up and running and have clients coming through the door, I’ve been back over the site to make it faster and better. Below, I’ve detailed what I’ve done. Here’s a before and after using the Lighthouse audit tool:

Turn cache headers on

There were no cache headers on our images, CSS or JavaScript. Part of this was that I was still making changes to the JavaScript and didn’t have any cache-busting functionality in the site yet. Now that I do, I could safely let the browser cache everything for a month.
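The Apache config for this is roughly the following. The exact types and lifetime are illustrative; the point is one month of browser caching now that cache-busting handles invalidation:

```apacheconf
# Sketch: cache static assets for a month, now that cache-busting is in place
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/jpeg "access plus 1 month"
    ExpiresByType image/png "access plus 1 month"
    ExpiresByType text/css "access plus 1 month"
    ExpiresByType application/javascript "access plus 1 month"
</IfModule>
```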

Replacing jQuery

jQuery is a library whose time has been and gone. But it does make it super easy to throw in some functionality. Now that I have a proper JavaScript setup, however, and as jQuery was mostly just animating things, I replaced it with native CSS animations and vanilla JS.
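As a hypothetical sketch of what that swap looks like, a jQuery fade such as `$('#info').fadeIn()` becomes a CSS transition plus one line of vanilla JS (class and element names here are made up for illustration):

```html
<style>
  /* The animation lives in CSS rather than jQuery */
  .panel { opacity: 0; transition: opacity 0.3s ease; }
  .panel.is-visible { opacity: 1; }
</style>

<div class="panel" id="info">Content to fade in</div>

<script>
  // Vanilla JS: toggle a class instead of calling $('#info').fadeIn()
  document.getElementById('info').classList.add('is-visible');
</script>
```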

Compressing the JavaScript

As there was no JavaScript preprocessing going on, it was not compressed. Ironically, this hasn’t made it any smaller because I’ve now got the Webpack bootstrapping in the file. However, it does mean I can easily load in additional modules, which I discuss below, to help with other areas of the site.

Gzip compression

This is a super-easy win because all you have to do is put it in your Apache config and the server does all of the rest.
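For example, the whole change can be as small as this (the list of types is illustrative; add whatever text formats your site serves):

```apacheconf
# Sketch: compress text-based responses; mod_deflate does the rest
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/css application/javascript image/svg+xml
</IfModule>
```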

Async loading of web fonts

We had a total of three blocking font calls in the header of the page. All of this has now gone. I’m using webfontloader to load in the two variations of Lato that we are using.
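The webfontloader call looks roughly like this. The script URL and version are the commonly used Google-hosted copy, and the weights shown are an assumption standing in for the two Lato variations:

```html
<script src="https://ajax.googleapis.com/ajax/libs/webfont/1.6.26/webfont.js" async></script>
<script>
  // Load the two Lato weights asynchronously so text rendering isn't blocked
  WebFont.load({
    google: { families: ['Lato:400,700'] }
  });
</script>
```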

Font Awesome is used for icons and is loaded in using a classic link tag. However, I’ve moved this link tag to the bottom of the page so that the initial content can be loaded first. On slow connections, this means the icons are missing for a fraction of a second when you load the page, but I think it is worth it.

If I were looking to optimise further, like I do with Worfolk Anxiety, I would select the individual icons I want, base64-encode them and put them directly in the CSS. But that seems like overkill here for the moment.

Finally, I’ve set the font-display CSS property to fallback so that if the fonts are slow to load, the text is rendered straight away using a system font.
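That property sits in the @font-face declaration. A minimal sketch, with an illustrative file path:

```css
/* Sketch: if Lato is slow to arrive, show a system font rather than blocking */
@font-face {
  font-family: 'Lato';
  src: url('/fonts/lato-regular.woff2') format('woff2');
  font-display: fallback;
}
```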

Webp images

Oh my god, webp images are so good. They’re like half the size of the already optimised JPEGs and PNGs that they are replacing.

Unfortunately, few browsers support them yet. It’s basically just Chrome (on desktop and Android). So, I’m using the picture tag with a fallback, as everyone does. I can’t wait until webp gets wider adoption.
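The picture-tag fallback works like this: browsers that understand webp take the first source, and everything else drops through to the img tag. Filenames and alt text here are illustrative:

```html
<picture>
  <source srcset="hero.webp" type="image/webp">
  <img src="hero.jpg" alt="Illustrative hero image">
</picture>
```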

Unfortunately, there is no way to do a safe fallback in CSS so my background images remain old JPEGs for everyone.

We now do wearables, too

Thursday, January 25th, 2018 | Limited, News, Tech

Worfolk Limited has been producing awesome software for many years. Whether we are building web applications and mobile apps for customers or launching them ourselves, I take a lot of pride in making them the best apps they can be, both from a user’s perspective and by leaving the client in the best position going forward.

That quality and attention to detail is now expanding to wearable devices, too.

This starts with Garmin devices, and I’m pleased to announce we’ve launched our first app, Mindful Moments. It gives you timely reminders to live in the present. If you have any of the Garmin watches that can download apps from the Garmin IQ Store (Forerunner 230+, Fenix, Vivo), you can try it for yourself.

It’s written in Monkey C, Garmin’s own Java-like language that its devices run on. Going forward, we’ll be developing more apps and making these services available to clients, too.

Why use continuous delivery?

Monday, October 23rd, 2017 | Tech

As a software consultant, I spend a lot of time going into big, slow-moving organisations with legacy software and helping them sort it out. One persistent feature of these organisations is regular but infrequent releases of their platform and a fear of moving to anything more rapid.

By infrequent, I mean they might release a few times a week (Tuesday and Thursday are fairly common), or maybe each weekday, or maybe even just once a week. In the world of agile, all of these schedules are infrequent. Modern, agile platforms release constantly.

Typically, these companies will be afraid to move to anything more agile because they have a system in place and they think that it works. They say things like “we can’t risk continuous delivery (CD), people depend on our platform”.

This, in my view, is a mistake. And in this post, I am going to set out the reasons why it is safer to use CD. Not why it is better for the product owners, makes more money and keeps your developers happy, though those are all good reasons. I will make this case purely from the view of change management and their worry that it will damage the integrity of their system.

Big bang releases do not work

The old model involves people from every team merging their code into a release branch, that branch being put on a staging environment and then manual tests being run against this.

This is a terrible way to do things. As everyone merges in their code, you get conflicts: some will be resolved correctly, some will not.

The changes interfere with each other in ways that you cannot predict and there is just too much ground for the manual test engineers to cover.

Worse, when everything does break because you have pushed 20 features live at the same time, it is then really difficult to do anything about it because you have to check whether you can roll back, then check whether there is anything critical that needs to go out, then do a fix branch or a new release branch and rush through the whole process again.

And it produces a huge number of incidents. If you have zero incidents right now, you have a good system. But does anybody have that?

It creates an automated testing culture

Such companies often say “we will move to CD when we have 100% automated test coverage”. But this is an unrealistic standard because they do not have 100% manual test coverage now.

Worse, because people rely on the manual test engineers to do the regression test, they don’t bother to put in place the correct level of automation. Maybe someday there will be a company that magically finds out how to do that. But nobody I have seen has so far.

The only way to force your engineers to do it is to move to a CD model and let them know that if they don’t put the automation in place there is no safety net and it will be traced back to them.

You don’t get features interfering with each other

Under the CD model, you release one feature at a time. So, gone are the days when two changes are merged in and don’t play nicely with each other. Each change goes out separately having passed all of the tests.

Critical features don’t get blocked

Sometimes, you have to push something out that is really important.

Under the traditional model, this is a major issue. Either you push it out as part of the scheduled release, and risk another feature breaking and having to roll back your critical change. Or you block out the entire release and stop everyone else from releasing for a few days. Which, as you can imagine, creates an even bigger big bang release later down the line.

These problems are eliminated with the one-feature-at-a-time CD model.

It is easier to roll back

If you do get a problem, it is super easy to roll back. You just hit the rollback button.

Under the traditional model, you have to check if you can roll back (due to all of the dependencies) and then if you are allowed to roll back (checking with the product owners that they are all okay with it) and then do some complicated rollback script.

All of this is simplified under a one-feature-at-a-time CD model where if it doesn’t work, you just roll it back straight away and don’t block anything else from releasing their features.

You can get fixes out faster

If something does slip through the net, you can get a fix out of the door faster than ever before. Gone are the days when you make the fix, try to work out what release branch it needs to go in, do all of your manual testing and then push it out the door.

Instead, you just write the fix and release it. And it’s fixed, way faster than it could be using the traditional model.

Summary

Yes, continuous delivery will make for happier product owners, happier developers and a faster-moving business.

But, and this is most important of all, it will also make your platform safer and more reliable. People think it will make things riskier, but, as I have outlined above, this is simply not the case.

With the CD model, you isolate every feature and every release, which is the gold standard of good change management. And, if anything does go wrong, it’s easier than ever to roll back and push a fix out.

Companies often believe that they cannot risk moving to a continuous delivery model. However, if their platform truly is important, then they cannot risk not moving to the CD model.

NHS Beta homepage

Saturday, September 23rd, 2017 | Tech

I’ve been working with the NHS to deliver a new homepage for the site that will replace NHS Choices. At the Health Expo this month, we launched the new homepage on the NHS’s beta platform. You can check it out here.

Four tools to make your website more accessible

Friday, August 25th, 2017 | Tech

Making your website accessible to people who are visually impaired isn’t sexy or glamorous. But it is pretty easy. And given how prevalent visual impairment is, especially among the elderly (who, all e-commerce operators should note, are the people with all the money), it is time well spent.

Here are four tools that will help you tune up your website.

W3 HTML validator

Assistive technology is already out there helping people. All you have to do is provide it with the correct input. And that starts with following HTML standards. And, where possible, using semantic HTML5 tags.

These work in everything except Internet Explorer 8, and the number of users on IE8 is now lower than the percentage of people with visual impairment. Plus, it’s very easy to add backward compatibility in.
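As a sketch, the semantic layout tags in question are the ones assistive technology can navigate by (the content here is placeholder text):

```html
<header>Site name and logo</header>
<nav>Primary navigation links</nav>
<main>
  <article>The page's main content</article>
</main>
<footer>Contact details</footer>
```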

Once you have done this, run it through W3’s HTML validator tool. This will check that your code makes sense and so everyone’s browsers (visually impaired or not) will be able to read it correctly.

Click here to go to the W3 Validator.

WAVE

WAVE stands for the Web Accessibility Evaluation Tool. It’s an online tool that has been running since 2001 and is considered one of the best ways to test how accessible your website is.

All you have to do is enter the URL of your website and WAVE will give you a full report, including helpful suggestions and things to fix. You get a copy of your page highlighted with the information to make it easy to find.

Like everything on this list, it’s free.

Click here to access the tool.

a11y.css

This is a bookmarklet that scans your page for problems. If you are not familiar with a bookmarklet, it is a magic bookmark: you save it to your favourites and then when you click it, it will run the report on the current web page you are browsing.

It highlights areas of the page with possible errors that you can then review. It’s quick and simple to use but doesn’t offer as much depth as WAVE.

Click here to check it out.

MDN documentation

The Mozilla Developer Network is the de facto authority on how HTML works. This includes documentation on ARIA, a standard designed to make web applications more accessible.

Even Mozilla’s documentation is rather hard to penetrate, but if you bear with it, you can get your head around it.
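To give a flavour of what ARIA looks like in practice, here is a hypothetical example (element IDs and content are made up): the attributes tell a screen reader that this button controls a menu and whether that menu is currently open.

```html
<button aria-expanded="false" aria-controls="site-menu">Menu</button>
<ul id="site-menu" hidden>
  <li><a href="/">Home</a></li>
  <li><a href="/contact">Contact</a></li>
</ul>
```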

Click here to read the ARIA documentation.

Summary

Making your website accessible is pretty easy: it’s all about following standards and best practice, and maybe adding a few HTML attributes if you have code doing fancy things.

Doing so makes your website much easier to access for the visually impaired, which will mean a better world for them and more traffic for you.

AppSpotr review

Tuesday, June 20th, 2017 | Tech

AppSpotr is a cloud-based service that allows you to make your own apps for iOS and Android.

I had a very brief play around with it, so I won’t pretend this is anything like an in-depth review. It allows you to create apps using a drag-and-drop editor. You can add a number of different pages to the app; the basic ones are free, and there is a monthly price for the rest of them.

So, for example, if you want to add a form to capture people’s details, that costs $5 per month. The enhanced content pages, which you need in order to add videos, cost $1 per month.

It seems like a useful service if you are, for example, a restaurant or hotel that needs a little app with a simple menu and some content pages. But, for anything more advanced, it probably will not provide you with what you need. There is no logic, for example: it is just a list of pages.

You also need a developer account with whatever platform you want to publish to.

Grammarly weekly report

Wednesday, June 14th, 2017 | Tech

Either I have become the most prolific writer of all time, or Grammarly’s numbers are incorrect.

According to my weekly report, I checked over half a million words last week. Now, I do write quite a lot. And it picks up the spell checking I deliberately do for my articles, as well as most of the content I write in online forms.

However, I am pretty sure I did not make my way through over 600,000 words.

One explanation is that the numbers are simply incorrect.

Another is that the Grammarly for Mac app isn’t great: it freaks out when it loses internet connection and you have to reload the page. It could be repeatedly sending everything back to its server for checking.

Or, I’m sleep writing.