CircleCI has recently published a very useful post “Why we’re no longer using Core.typed” that raises some important concerns w.r.t. Typed Clojure, which in their particular case led to the cost outweighing the benefits. CircleCI has a long and positive relationship with Ambrose Bonnaire-Sergeant, the main author of core.typed, who has addressed their concerns in his recent Strange Loop talk “Typed Clojure: From Optional to Gradual Typing” (gradual typing is also explained in his 6/2015 blog post “Gradual typing for Clojure“). For the sake of searchability and those of us who prefer text to video, I would like to summarise the main points from the response (spiced with some thoughts of my own).
Posted by Jakub Holý on October 6, 2015
While refactoring a relatively simple piece of Clojure code to use a map instead of a vector, I wasted perhaps a few hours due to what were essentially type errors. I want to share the experience and my thoughts about possible solutions since I encounter this problem quite often. I should mention that it is quite likely more a problem (an opportunity? :-)) with me than with the language, namely with the way I write and (don’t) test it.
The core of the problem is that I write chains of transformations based on my sometimes flawed idea of what data I have at each stage. The challenge is that I cannot see what the data is and have to maintain a mental model while writing the code, and I suck at it. Evaluating the code in the REPL as I develop it helps somewhat but only when writing it – not when I decide to refactor it.
Posted by Jakub Holý on November 3, 2014
My concise highlights from CodeMesh 2014.
Philip Potter has very good CodeMesh notes as well, as usual.
TODO: Check out the papers mentioned in the NoSQL is Dead talk. (<- slides)
Tutorial: QuickCheck (John Hughes)
- QC => Less code, more bugs found
- QC tests are based on models of the system under test – with some kind of a simple/simplified state, legal commands, their preconditions, postconditions, and how they impact the state. The model is typically much smaller and simpler than the implementation code.
- QuickCheck CI (quickcheck-ci.com) – free on-line service for running CI tests for a GitHub project. Pros: You don’t need QC/Erlang locally to play with it, it provides history of tests (so you never lose a failed test case), it shows test coverage also for failed tests so you see which code you can ignore when looking for the cause.
- See John’s GitHub repo with examples – https://github.com/rjmh/
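In the Clojure world the same idea is available via clojure.test.check. This is my own sketch (not from the talk), assuming the org.clojure/test.check dependency – a minimal model-based property:

```clojure
(require '[clojure.test.check :as tc]
         '[clojure.test.check.generators :as gen]
         '[clojure.test.check.properties :as prop])

;; property: reversing a vector twice yields the original sequence
(def reverse-twice-is-identity
  (prop/for-all [v (gen/vector gen/int)]
    (= v (reverse (reverse v)))))

;; run it against 100 generated inputs
(:result (tc/quick-check 100 reverse-twice-is-identity)) ;=> true
```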
Shrinking (a.k.a. simplification)
- Doesn’t just make the example shorter by leaving things out but tries a number of strategies to simplify the example, typically defined by the corresponding generators – f.ex. numbers are simplified to 0, lists to earlier elements (as in “(elements [3, 4, 5])”) etc.
- You may implement your own shrinking strategies. Ex.: Replace a command with “sleep(some delay)” – so that we trigger errors due to timeouts. (A noop that just waits for a while is simpler than any op).
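To see shrinking in action (my own sketch with clojure.test.check, not from the talk), a deliberately false property shrinks its failing input to a minimal counterexample:

```clojure
(require '[clojure.test.check :as tc]
         '[clojure.test.check.generators :as gen]
         '[clojure.test.check.properties :as prop])

;; deliberately false: claims every generated int is below 10
(def all-below-10
  (prop/for-all [v (gen/vector gen/int)]
    (every? #(< % 10) v)))

;; the originally failing vector may be long and messy, but the shrunk
;; counterexample is minimal – typically a one-element vector like [[10]]
(get-in (tc/quick-check 100 all-below-10) [:shrunk :smallest])
```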
- Run QC; assuming a test failed:
- Instead of diving into the implementation, first use QC to check your hypothesis of what constitutes “bad input” by excluding the presumed bad cases – f.ex. “it fails when the input has 8 characters” => exclude tests with 8 characters and rerun; if you find new failures, you know the hypothesis doesn’t cover all problems – and you will perhaps refine it to “fails when it has a multiple of 8 chars” etc. We thus learn more about the wrong behavior and its bounds. Assumption we want to verify: no (other) tests will fail.
- Do the opposite – focus on the problem, i.e. modify the test case to produce only “bad cases”. Assumption we want to verify: all tests will fail.
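The exclusion step can be expressed directly in the generator. In my own clojure.test.check sketch below, `handles?` is a hypothetical stand-in for the real check against the system under test:

```clojure
(require '[clojure.test.check :as tc]
         '[clojure.test.check.generators :as gen]
         '[clojure.test.check.properties :as prop])

(defn handles? [s] true) ; stub standing in for the real system under test

;; hypothesis: "it fails when the input has 8 characters" – exclude those
;; inputs and rerun; any remaining failure disproves the hypothesis
(def prop-without-suspects
  (prop/for-all [s (gen/such-that #(not= 8 (count %)) gen/string)]
    (handles? s)))

(:result (tc/quick-check 100 prop-without-suspects)) ;=> true (for the stub)
```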
QC vs. example-based testing
QC code tends to be 3-6× smaller than the implementation (partly thanks to the conciseness of Erlang) and fairly simple.
The case of Volvo: 3k pages of specs, 20 kLOC QC, 1M LOC C implementations; found 200 bugs and 100 problems (contradictions, unclarities) in the specs. It took 2-3 years of working on it on and off.
Erlang dets storage race conditions: 100 LOC QC, 6 kLOC Erlang impl.
Testing stateful stuff
Invoke different API calls (“commands”) until one of the presumably legal calls fails due to an accumulated corrupted state. This is an iterative process where we evolve our model of the system – commands, their preconditions (when they can be legally invoked), postconditions, and our representation of the state.
Ex.: Testing of a circular queue. Commands: push (legal on a non-full queue), get (legal on a non-empty one), create new => generates sequences of new, push and get commands.
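A hand-rolled Clojure version of this idea (my own sketch, not from the talk): generate command sequences and run them against both the implementation (here clojure.lang.PersistentQueue) and a trivial vector model, comparing the visible state after every step:

```clojure
(require '[clojure.test.check :as tc]
         '[clojure.test.check.generators :as gen]
         '[clojure.test.check.properties :as prop])

;; commands: [:push n] or [:pop]
(def cmd-gen
  (gen/vector (gen/one-of [(gen/tuple (gen/return :push) gen/int)
                           (gen/return [:pop])])))

(defn run-cmds [q cmds]
  ;; fold the commands over both the queue under test and a vector model,
  ;; checking after each step that their contents agree
  (reduce (fn [[q model] [op x]]
            (let [[q' model'] (case op
                                :push [(conj q x) (conj model x)]
                                :pop  [(if (seq q) (pop q) q)
                                       (if (seq model) (subvec model 1) model)])]
              (if (= (seq q') (seq model'))
                [q' model']
                (reduced false))))
          [q []]
          cmds))

(def queue-matches-model
  (prop/for-all [cmds cmd-gen]
    (boolean (run-cmds clojure.lang.PersistentQueue/EMPTY cmds))))

(:result (tc/quick-check 100 queue-matches-model)) ;=> true
```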
Testing race conditions
Precondition: Run on a multicore PC or control the process scheduler.
- There are many possible correct results (valid interleavings) of parallel actions => impractical to enumerate and thus to test with example-based tests
- Correct result is such that we can order (interleave) the concurrently executed actions such that we get a sequential execution yielding the same result. F.ex. an incorrect implementation of a sequence number generator could return the same number to two concurrent calls – which is not possible if the calls were done sequentially.
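The check itself can stay simple (my own illustration): for the sequence-number generator, no sequential interleaving can ever hand out the same id twice, so duplicates in a concurrent run prove a race:

```clojure
;; a result of a concurrent run is linearizable only if SOME sequential
;; ordering of the calls could produce it; for an id generator that means
;; all observed ids must be distinct
(defn explainable-sequentially? [observed-ids]
  (or (empty? observed-ids)
      (apply distinct? observed-ids)))

(explainable-sequentially? [2 1 3]) ;=> true  (the ordering 1,2,3 explains it)
(explainable-sequentially? [1 1])   ;=> false (no sequential run repeats an id)
```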
Testing data structures
Map the DS to a simpler one and use that as the model – f.ex. a list for a tree (provided there is a to_list function for the tree).
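A Clojure sketch of the technique (my own, using test.check, with clojure’s sorted-set standing in for the tree and a plain sorted list as the model):

```clojure
(require '[clojure.test.check :as tc]
         '[clojure.test.check.generators :as gen]
         '[clojure.test.check.properties :as prop])

;; the "to_list" of the tree must always equal the simple list model
(def tree-matches-list-model
  (prop/for-all [xs (gen/vector gen/int)]
    (= (vec (into (sorted-set) xs))  ; tree under test, converted to a list
       (vec (sort (distinct xs)))))) ; the model: sorted distinct values

(:result (tc/quick-check 100 tree-matches-list-model)) ;=> true
```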
Tutorial: Typed Clojure (Ambrose Bonnaire-Sergeant)
Note: The documentation (primarily the introductory one) could be better
- typedclojure.org and typedclojure@GitHub
- core.typed-example (currently a little outdated but in the process of being updated)
- Prismatic Schema vs. typed.clojure by Ambrose – pros and cons of both
- lein-typed – plugin to check your code (lein typed check)
- separately: (ann ..)
- around: wrap in (ann-form <defn> <type def.>)
- inside: use t/fn, t/defprotocol etc.
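The three annotation styles look roughly like this (my own sketch, assuming the clojure.core.typed dependency; the names are made up):

```clojure
(ns demo.core
  (:require [clojure.core.typed :as t]))

;; separately, with ann:
(t/ann greet [t/Str -> t/Str])
(defn greet [name] (str "Hello, " name))

;; around an expression, with ann-form:
(def three (t/ann-form (+ 1 2) t/Int))

;; inside, with the typed variant of fn:
(def add1 (t/fn [x :- t/Int] (inc x)))

;; check the whole namespace from the REPL:
;; (t/check-ns 'demo.core)
```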
Gradual introduction of typed.clojure
- wrap everything in (t/tc-ignore …)
- for unchecked fns you depend on, add (ann ^:no-check somefn […])
- If you stare at a type error, consider using contracts (prismatic/Schema or pre/post conds etc.)
- f.ex. Cursive and Vim (and others?) have support for core.typed – they show type errors in the source
- core.typed has a number of dependencies => don’t include it as a production dependency – see https://github.com/typedclojure/core.typed-example/blob/master/project.clj
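A contract with Prismatic Schema, as mentioned above, could look like this (my own sketch, assuming the prismatic/schema dependency; the `Order` shape is made up):

```clojure
(require '[schema.core :as s])

;; the expected shape of the data at one point in the code
(def Order {:id    s/Int
            :items [{:sku s/Str, :qty s/Int}]})

(s/validate Order {:id 1, :items [{:sku "A-1", :qty 2}]})
;; returns the value unchanged; on a mismatch it throws an exception
;; describing exactly which keys failed
```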
Keynote: Complexity (Jessica Kerr, Dan North)
- Always have tasks of all three types: research (=> surface complexity), kaizen (continuous improvement, and improvement of the improvement process), coding – these 3 interleave the whole time
- A team needs skills in a number of areas, it isn’t just coding – evaluation of the business value delivered, monitoring, programming, testing, deployment, DB, FS, networks, … .
Keynote: Tiny (Chad Fowler)
Keep things tiny to be efficient (tiny changes, tiny teams, tiny projects, …).
- Research by the military and in SW dev [TODO: find the two slides / qsm, Scrum] shows that teams of max 5-6 work best
- Teams of 7+ take considerably more time and thus money (5× more according to one study) to complete the same thing
- => small, autonomous teams with separate responsibilities (decomposition, SRP etc. FTW!)
- Human capacity to deal with others is limited – one company creates a new department whenever size exceeds 100
- Big projects fail; Standish CHAOS report – only ca. 10% of larger projects succeed compared to nearly 80% of small ones (summed together: 39% succeed)
- Note: 1 month is not a short iteration
Distributed Programming (Reid Draper)
RPC is broken
– it tries to pretend a remote call is the same as a local one, but:
- what if the call never returns?
- the connection breaks? (has the code been executed or not yet?)
- what about serialization of arguments (file handles, DB conn.,…)
It ignores the special character of a remote call and the 8 fallacies of distributed programming.
Message passing is better than RPC. There is also less coupling, as the receiver itself decides what code to call for a specific message.
From the Bloom language’s page (highlight mine):
Traditional languages like Java and C are based on the von Neumann model, where a program counter steps through individual instructions in order. Distributed systems don’t work like that. Much of the pain in traditional distributed programming comes from this mismatch: programmers are expected to bridge from an ordered programming model into a disordered reality that executes their code. Bloom was designed to match–and exploit–the disorderly reality of distributed systems. Bloom programmers write programs made up of unordered collections of statements, and are given constructs to impose order when needed.
Correctness testing of concurrent stuff
- Unit testing is unsuitable – there are just too many combinations of correct results, and it can only test the cases the dev can think of
- => generate the tests – property-based testing / QuickCheck
- PULSE – an addon to property-based testing that tries to trigger concurrency problems by using a scheduler that tries different interleavings of actions (randomly but repeatedly) [Erlang]
- Simulation testing – Simulant
Beware the effects of GC, page cache, cronjobs (e.g. a concurrently running backup), SW updates => running a simple load test for a few minutes is not enough.
Cheats & Liars: The Martial Art of Protocol Design (Pieter Hintjens)
Pieter is the brain behind AMQP, ZeroMQ and EdgeNet (protocols for an anonymous, secure, peer-to-peer internet). He has shared great insights into designing good protocols, the dirty business surrounding protocols and standardization, and the troll-proof organization of communities (as self-organizing, distributed teams).
- Protocol is a contract for working together
- It should be minimalistic and specific, name the participants, …
- Protocols and their standardization are prey to “psychopathic” organizations that want to hijack them for their own profit (by pushing changes that benefit them, taking over the standardization process, …) (Pieter has experienced it e.g. with AMQP; these trolls always show up). It’s advantageous to take control of a successful protocol so that you can make money off it or build stuff on it and sell that. Examples:
- Microsoft MS Doc XML – this “open” spec f.ex. reportedly defines that one function works “as Word 95”
- A company pushing changes that nobody else really understands, thus undermining compatibility of implementations
- Pushing such changes that an implementor can claim compliance with the standard yet implement it so that his products only work with each other
- Crazy/proprietary protocol extensions, patenting/trademarking/copyrighting the spec (e.g. the TM on Bluetooth)
- Hijacking-safe protocol creation process (beware “predatory maliciousness”):
- The spec is GPL => nobody can capture it (e.g. ZeroMQ)
- The community has clear rules and deals with trolls by kicking them out
- There is a good process for evolving the spec
- How to spec a protocol?
- Start with a very, very small and simple contract – only add things that you desperately need – e.g. ZeroMQ v1 had no versioning, security, or metadata (versioning was added in v2, metadata in v3, security later). You don’t know what you really need until you try it. F.ex. even the original AMQP has 60-75% waste in it!
- Do very slow and gradual evolution
- Layering is crucial – keep your protocol on one layer, only specify relevant things, leave the other layers for other specs so they can evolve and age at different speeds; the more a spec contains, the earlier something in it will be outdated (a pizza contract says nothing about the kitchen, f.ex.)
- Community and cooperation (See the Ch.6 The ØMQ Community mentioned above.)
- community needs clear rules to keep trolls away (and they always pop up)
- don’t just absorb the damage trolls do, ban them
- self-org., decentralized team
PureScript (Bodil Stokke)
PureScript is a very Haskell-like language that compiles to JS. It is a pure functional language; effects are performed only via the effect monad (Eff). It is pragmatic w.r.t. interoperability – it is possible to embed JS code and just add a signature to it; the compiler will trust it.
Moreover, you can use property-based testing with QuickCheck and Functional Reactive Programming with Bodil’s Signal library. Isn’t that wonderful?!
Category theory notes:
- A semigroup is a set with an associative operation (e.g. ints with +)
- A monoid is a semigroup with a unit element, i.e. one where “element operation unit = element”, such as 0 for + or 1 for *.
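The laws are easy to spot-check in plain Clojure (my own illustration):

```clojure
;; check the monoid laws for an operation `op` with claimed unit `id`
(defn monoid-laws-hold? [op id a b c]
  (and (= (op (op a b) c) (op a (op b c))) ; associativity (the semigroup law)
       (= a (op a id) (op id a))))          ; left and right identity

(monoid-laws-hold? + 0 2 3 4) ;=> true
(monoid-laws-hold? * 1 2 3 4) ;=> true
(monoid-laws-hold? - 0 2 3 4) ;=> false (subtraction is not associative)
```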
Megacore, Megafast, Megacool (Kevin Hammond)
Interesting research project ParaPhrase for parallelization of code through automatic refactoring and application of any of the supported topologies (Farm, Pipeline, Map, …) – ParaPhrase-ict.eu and www.project-advance.eu (in general, the promises of automation regarding fixing software development problems have hugely underdelivered, but still something might be possible). In some cases their solution managed to do in hours what a developer did in days.
Quote Bob Harper:
The only thing that works for parallelism is functional programming
PS: C++17 is going to have support for parallel and concurrent programming.
Categories for the Working Programmer (Jeremy Gibbons)
An elementary intro to category theory in 10 points, yet I got immediately lost. It might be worth studying the slides and looking for more resources at the author’s uni page and not-so-active blog Patterns in Functional Programming.
NoSQL is Dead (Eric Redmond)
Main message: There are just too many differences between NoSQL DBs for the term to be meaningful.
I had an inspiring lunch chat with Chad and a Polish lady whose name I unfortunately don’t know. Their companies do fascinating stuff to leverage the potential of their people – one has replaced top-down management of projects with an environment where there are clear objectives (increase monthly active users) and the freedom to come up with ideas that contribute to them, recruit other people for an idea and, if successful, go implement it (while continually measuring against the objectives). Clearly it is not easy – some people have trouble trying to manage everything, or do what they believe in without checking the real impact on the objectives, etc. – but it already provides more value than before. This has been going on for just a few months, so hopefully when it settles more we will hear about their experience.
The other company realized that people are different (wow! how could our industry ignore that?!) and started doing psychological profiling of employees to understand what type of team member they are – a driver, a worker, a critic who is always hunting for possible issues and problems, etc. And they compose teams so that they have the right mix of different personalities to avoid both insurmountable conflicts and the risks of group-think.
I believe this is the future of our industry – really understand people and “hack” our organizations to leverage that for greater happiness and better results.
- Jessica Kerr: Simulation of team work and the effect of (no) slack – what happens when you let your programmers crunch through work without any slack time? And when you introduce slack? Jessica has made this Scala simulation to produce the results we would expect – much more even production in the slack case, and a lot of rework after deploying features in the non-slack version. Not at all scientific, but very nice when you want to *show* your higher-ups what happens when you do the former or the latter. Some people respond much more to visual stimuli (even if totally made to conform to the message you want to get across) than to tons of theory.
- Aphyr – Strong consistency models – “strong consistency” is a much broader term than I expected, and not all consistency models are so consistent :-) Check out especially the consistency family tree image.
Posted by Jakub Holý on June 26, 2014
Packt Publishing has asked me to review their new book, Clojure for Machine Learning (4/2014) by Akhil Wali. Interested both in Clojure and M.L., I have taken up the challenge and want to share my impressions from the first chapters. Regarding my qualifications: I am a medium-experienced Clojure developer and briefly encountered some M.L. (regression etc. for quantitative sociological research, and neural networks) at university a decade ago, together with the related, now mostly forgotten, math such as matrices and derivatives.
In short, the book provides a good bird’s-eye view of the intersection of Clojure and Machine Learning, useful for people coming from both sides. It introduces a number of important methods and shows how to implement/use them in Clojure but does not – and cannot – provide deep understanding. If you are new to M.L. and, like me, really like to understand things, you will want to get proper textbook(s) to learn more about the methods and the math behind them and read them in parallel. If you know M.L. but are relatively new to Clojure, you will want to skip all the M.L. parts you know and study the code examples and the tools used in them. To read it, you need only elementary knowledge of Clojure but need to be comfortable with math (if you haven’t worked with matrices or statistics, or derivatives and equations scare you, you will have a hard time with some of the methods). You will learn how to implement some M.L. methods using Clojure – but without deep understanding, without knowledge of their limitations and issues, and without a good overview of the alternatives and the ability to pick the best one for a particular case.
Posted by Jakub Holý on May 19, 2014
The other day I got this little helpful exception from Clojure:
(cond (>= nil 1) :unreachable) ;=> NullPointerException [trace missing]
– no line number or anything to troubleshoot it.
It turns out it is not Clojure’s failure but a HotSpot optimization that can apply to built-in exceptions such as NullPointerException and ClassCastException. The remedy is to run the JVM with -XX:-OmitStackTraceInFastThrow.
From the Oracle JDK release notes:
The compiler in the server VM now provides correct stack backtraces for all “cold” built-in exceptions. For performance purposes, when such an exception is thrown a few times, the method may be recompiled. After recompilation, the compiler may choose a faster tactic using preallocated exceptions that do not provide a stack trace. To disable completely the use of preallocated exceptions, use this new flag: -XX:-OmitStackTraceInFastThrow
Many thanks to Ivan Kozik for the info!
Posted by Jakub Holý on April 30, 2014
What my Clojure code is doing most of the time is transforming data. Yet I cannot see the shape of data being transformed – I have to know what the data looks like on the input and hold a mental model of how they change at each step. But I make mistakes. I make mistakes in my code so that the data does not correspond anymore to the model it should follow. And I make mistakes in my mental model of what the data currently looks like, leading likely to a code error later on. The end result is the same – a little helpful exception at some later step regarding wrong shape of data. There are two problems here: The error typically provides too little useful information and it usually manifests later than where the code/model mistake actually is. I therefore easily spend an hour or more troubleshooting these mistakes. In addition to that, it is also hard to read such code because a reader lacks the writer’s mental model of the data and has to derive it herself – which is quite hard especially if the shape of the input data is not clear in the first place.
I should mention that I of course write tests and experiment in the REPL, but I still hit these problems, so it is not enough for me. Tests cannot protect me from having a wrong model of the input data (since I write the [unit] tests based on the same assumptions as the code and only discover the error when I integrate all the bits), and even if they help to discover an error, it is still time-consuming to find the root cause.
Can I do better? I believe I can.