The Holy Java

Building the right thing, building it right, fast

Simulating network timeouts with toxiproxy

Posted by Jakub Holý on May 9, 2017

Goal: Simulate how a Node.js application reacts to timeouts.

Solution: Use toxiproxy and its timeout “toxic” with the value of 0, i.e. the connection won’t close, and data will be delayed until the toxic is removed.

The steps:

  1. Start toxiproxy, exposing the port 6666 that we intend to use as localhost:6666:
docker pull shopify/toxiproxy
docker run --name=toxiproxy --rm --expose 6666 -p 6666:6666 -it shopify/toxiproxy

(If I were on Linux rather than macOS, I could use --net=host and wouldn’t need to expose and map the port.)

  2. Tell toxiproxy to serve requests at port 6666 via an upstream service:
docker exec -it toxiproxy /bin/sh
/ # cd /go/bin/
/go/bin # ./toxiproxy-cli create upstream -l 0.0.0.0:6666 -u google.com:443
  3. Modify your code to access the local port 6666 and test that everything works.

Since we want to access Google via HTTPS, we would get a certificate error when accessing it via localhost:6666, so we will add an alias to our local /etc/hosts:

127.0.0.1 proxied.google.com

and use https://proxied.google.com:6666 in our connecting code (instead of the https://google.com:443 we had there before). Verify that it works and that the code gets a response as expected.

  4. Tell toxiproxy to impose an infinite timeout for this service

Continuing our toxiproxy configuration from step 2:

./toxiproxy-cli toxic add -t timeout -a timeout=0 upstream

(Alternatively, use e.g. timeout=100; the connection will then be closed after 100 ms.)

  5. Trigger your code again. You should now get a timeout.

Tip: You can simulate the service being down via disabling the proxy:

./toxiproxy-cli toggle upstream

Posted in Tools, Uncategorized

Demonstration: Applying the Parallel Change technique to change code in small, safe steps

Posted by Jakub Holý on February 3, 2017

The Parallel Change technique makes it possible to change code in small, safe steps by first adding the new way of doing things without breaking the old one (“expand”), then switching over to the new way (“migrate”), and finally removing the old way (“contract”, i.e. make smaller). Here is an example of applying it in practice to refactor code producing a large JSON document that contains a dictionary of addresses in one place and refers to them by their keys in other places. The goal is to rename the key. (We can’t use simple search & replace for reasons.)
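In JavaScript, the three phases of such a key rename could look roughly like this (a toy sketch – the key names homeAddr/homeAddress and the functions are invented for illustration):

```javascript
// Expand: produce the address under both the old and the new key,
// so old and new readers keep working side by side.
function buildAddresses(address) {
  return { homeAddr: address, homeAddress: address };
}

// Migrate: switch every reader over to the new key, one at a time.
function lookupAddress(addresses) {
  return addresses.homeAddress; // previously: addresses.homeAddr
}

// Contract: once no reader uses the old key, stop producing it.
function buildAddressesFinal(address) {
  return { homeAddress: address };
}

console.log(lookupAddress(buildAddresses({ street: 'Main St 1' })));
```

Each phase is independently releasable, which is the point: the system works at every intermediate step.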

Read the rest of this entry »

Posted in General

It Is OK to Require Your Team-mates to Have Particular Domain/Technical Knowledge

Posted by Jakub Holý on March 6, 2016

Should we write dumbed-down code that is easy for newcomers to understand? It seems like a good thing to do. But it is the wrong thing to optimise for, because it is a rare case: most of the time you will be working with people experienced in the code base. And when a new member joins, you should not just throw her into the water and expect her to learn and understand everything on her own. It is better to optimise for the common case, i.e. people who are up to speed. It is thus OK to expect and require that developers have certain domain and technical knowledge – and to spend resources to ensure that is the case with new members. Simply put, you should not dumb down your code to match common knowledge but elevate new team-mates to the baseline you have defined for your product (based on your domain, the expected level of experience and dedication, etc.).

Read the rest of this entry »

Posted in SW development

Don’t add unnecessary checks to your code, pretty please!

Posted by Jakub Holý on March 4, 2016

Defensive programming suggests that we should add various checks to our code to ensure the presence and proper shape and type of data. But there is one important rule – only add a check if you know that the thing it guards against can really happen. Don’t add random checks just to be sure, because you are misleading the next developer.
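For instance (a made-up JavaScript illustration – the order shape and field names are invented):

```javascript
// The hypothetical upstream service is known to omit `discount` for legacy
// orders, so this check documents something that really happens:
function totalPrice(order) {
  const discount = order.discount === undefined ? 0 : order.discount;
  return order.price - discount;
}

// By contrast, also guarding `order.price` "just in case" – when price is in
// fact always present – would mislead the next developer into believing it
// can be missing.
console.log(totalPrice({ price: 100 }));               // → 100 (legacy order)
console.log(totalPrice({ price: 100, discount: 20 })); // → 80
```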

Read the rest of this entry »

Posted in SW development | 1 Comment »

2015 in review

Posted by Jakub Holý on February 19, 2016

The WordPress.com stats helper monkeys prepared a 2015 annual report for this blog.

Here’s an excerpt:

The Louvre Museum has 8.5 million visitors per year. This blog was viewed about 200,000 times in 2015. If it were an exhibit at the Louvre Museum, it would take about 9 days for that many people to see it.


Posted in Uncategorized

A Costly Failure to Design for Performance and Robustness

Posted by Jakub Holý on December 6, 2015

I have learned that it is costly to not prioritise expressing one’s design concerns and ideas early. As a result, we have a shopping cart that is noticeably slow, goes down whenever the backend experiences problems, and is a potential performance bottleneck. Let’s have a look at the problem, the actual and my ideal designs, and their pros and cons.

We have added shopping cart functionality to our web shop, using a backend service to provide most of the functionality and to hold the state. The design focus was on simplicity – the front-end is stateless, any change to the cart is sent to the backend, and the current content of the cart is always fetched anew from it, to avoid the complexity of maintaining and syncing state in two places. Even though the backend wasn’t designed for the actual front-end needs, we work around that. The front-end doesn’t need to do much work, and in this regard the design is a success.

Read the rest of this entry »

Posted in SW development

Why we practice front-end-first design (instead of API-first)

Posted by Jakub Holý on December 6, 2015

Cross-posted from the TeliaSonera tech blog

Alex has introduced us to the idea of front-end first design: You start by creating the front-end (browser) code. As you discover data or API calls that you need, you mock them. When the UI stabilizes, you use the mocked APIs and data to create the backend with exactly the functionality and exactly the data needed by the UI. The end result is a simpler application.
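A minimal sketch of the idea (module name and data shape invented for illustration): the UI is written against the API it wishes existed, with a mock standing in until the backend implements exactly that contract.

```javascript
// Mocked API – returns exactly the data the UI needs, nothing more.
// Later, the real backend replaces this module while keeping the contract.
const api = {
  getCart: () =>
    Promise.resolve({ items: [{ name: 'Phone', qty: 1 }], total: 4990 }),
};

// The UI code is written against the mocked contract from day one.
async function renderCartSummary() {
  const cart = await api.getCart();
  return `${cart.items.length} item(s), total ${cart.total}`;
}

renderCartSummary().then(console.log); // → "1 item(s), total 4990"
```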

We are trying to adopt this as our approach because it is so sensible. Whenever we work with an API that wasn’t designed with the actual client needs in mind, we experience unnecessary friction and have to do various workarounds and adaptations, so front-end-first absolutely makes sense to us. (E.g. when working with a REST API designed in line with REST principles – but not with our needs – resulting in overly chatty communication and more complex code.)

Of course there are some limitations. It is more challenging when you need to support different clients. And you need to take into account not just what the UI wants but also what is reasonably possible within the constraints of the existing system. You want to avoid a big gap between the two – we still remember the pain of integrating OOP and relational databases and the complexity and pitfalls of Object-Relational Mappers such as Hibernate, which try to bridge the two.

Conclusion

Front-end-first design rocks (for us). Try it too and see whether you also get simpler application code and a shorter time to market.

Posted in SW development

Troubleshooting And Improving HTTPS/TLS Connection Performance

Posted by Jakub Holý on November 27, 2015

Our team has struggled with slow calls to the back-end, resulting in unpleasant, user-perceivable delays. While a direct (HTTP) call to a backend REST service took around 50 ms, our median time was around 300 ms (going over HTTPS and through a proxy between us and the service).

We have just decreased that to a median of 80 ms by making sure to keep connections alive and reuse them, which in Node.js can be achieved by using an https.Agent with keepAlive: true (see the Node TLS documentation).

PayPal has a couple of additional useful tips in their 4/2014 post Outbound SSL Performance in Node.js, mainly:

  • Disable expensive SSL ciphers (if you don’t need their strength)
  • Enable SSL session resume, if supported by the server, for shorter handshakes – the StrongLoop post “How-to Improve Node.js HTTPS Server Performance” explains how to enable SSL session resume
  • Keep Alive

The article SSL handshake latency and HTTPS optimizations (via Victor Danell) explains the roughly 3.5× higher cost of SSL, due to the 3 round trips needed for the handshake (plus key-generation time), and shows how to use curl to time connections and their SSL parts, as well as how to use OpenSSL and tcpdump to learn even more about it.

See also IsTlsFastYet.com for a lot of valuable information, benchmarks etc.

Tools

(See the articles linked to above for examples)

  • curl
  • openssl s_client
  • pathchar by the traceroute author, intended to help to “find the bandwidth, delay, average queue and loss rate of every hop between any source & destination”; there is also pchar, based on it

 

Posted in General

Moving Too Fast For UX? Genuine Needs, Wrong Solutions

Posted by Jakub Holý on November 12, 2015

Cross-posted from the TeliaSonera tech blog

Our UX designer and interaction specialist – a wonderful guy – has shocked us today by telling us that we (the developers) are moving too fast. He needs more time to do proper user experience and interface design – talk to real users, collect feedback, design based on data, not just hypotheses and gut feeling. To do this, he needs us to slow down.

We see a common human “mistake” here: the expression of a genuine need gets mixed in with a suggestion for satisfying it. We are happy to learn about the need and will do our best to satisfy it (after all, we want everybody to be happy, and we too love evidence-based design), but we want to challenge the proposed solution. There is never just one way to satisfy a need – and the first proposed solution is rarely the best one (not to mention that this particular one goes against our needs as developers).

Read the rest of this entry »

Posted in SW development

To upgrade or not to upgrade dependencies? The eternal dilemma

Posted by Jakub Holý on October 20, 2015

Cross-posted from TeliaSonera Tech blog.

Handling dependencies is one of the important challenges in any software project – especially in the fast-moving JavaScript world. Our Nettbutikk team just had a heated discussion about handling upgrades of our dependencies, which continues our learning journey lined with failures (or rather “experiments that generated new knowledge” :-)).

Failed attempt one: Let tools do it

Originally we let npm automatically apply minor upgrades, but that turned out to be problematic: even minor version changes can introduce bugs, and having potentially different (minor) versions on our different machines and in production makes troubleshooting difficult.
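One remedy for the version drift (a sketch – the package name and version are purely illustrative, not what the team chose) is to pin exact versions in package.json, so every machine and production resolve the identical dependency tree:

```json
{
  "dependencies": {
    "express": "4.13.3"
  }
}
```

Note the exact "4.13.3" rather than a range like "^4.13.3", which would allow npm to pull in newer minor versions.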

Read the rest of this entry »

Posted in SW development