
Summers of Code, 2006-2013

A retrospective of 8 years of SoC, and lessons learned

A compilation of Haskell-related student projects 2006-2013, with evaluations of their usefulness to the Haskell community, thoughts on what makes a good project, and predictions for 2011-2013.

As part of its Summer of Code program, Google sponsors 5-10 SoC projects for Haskell each year. The Haskell Summers of Code have often produced excellent results, but how excellent is excellent? Are there commonalities among the successful projects, or among the unsuccessful ones? A retrospective review suggests to me that while many of the Haskell Summer of Code projects have delivered valuable products which the Haskell community badly needed and which have since become widely adopted, a number of them failed for reasons that likely could have been foreseen: low importance or unclear uses, and over-ambition.

Example Retrospective: Debian

In 2009, a blogger & Debian developer wrote a four-part retrospective series on the 2008 Debian Summer of Code projects. The results are interesting: some projects were failures, and the relevant student drifted away and had little to do with Debian again; others were great successes. I don’t discern any particular lessons there, except perhaps one against hubris or filling unclear needs. I decided to compile my own series of retrospectives on the Haskell Summers of Code.

Judging Haskell SoCs

Google describes SoC as

“…a global program that offers students stipends to write code for open source projects. We have worked with the open source community to identify and fund exciting projects for the upcoming summer.”

or

“…a global program that offers student developers stipends to write code for various open source software projects. We have worked with several open source, free software, and technology-related groups to identify and fund several projects over a three month period. Since its inception in 2005, the program has brought together over 4500 successful student participants and over 3000 mentors from over 100 countries worldwide, all for the love of code. Through Google Summer of Code, accepted student applicants are paired with a mentor or mentors from the participating projects, thus gaining exposure to real-world software development scenarios and the opportunity for employment in areas related to their academic pursuits. In turn, the participating projects are able to more easily identify and bring in new developers. Best of all, more source code is created and released for the use and benefit of all.”

It is intended to produce source code for the ‘use and benefit of all’; it is not meant to produce academic papers, code curiosities, forgotten blog posts, or groundwork for distant projects, but ‘exciting’ new production code. This is the perspective I take in trying to assess SoC projects: did it ship anything? If standalone, are the results in active use by more than a few developers or other codebases? If a modification to an existing codebase, was it merged and is it now actively maintained1? And so on. Sterling Clover argues that this is far too demanding and does not consider whether an involved student is energized by his contribution to go on and contribute still more2; I disagree about the former, and I have not done the latter because it would be too labor-intensive to track down every student and assess their later contributions, which would involve still more subjective appraisals3. (Perhaps in the future I or another Haskeller will do that.)

Haskell Retrospective

Haskell wasn’t part of the first Summer of Code in 2005, but it was accepted for 2006. We start there.

2006

The 2006 homepage lists the following projects:

4 successful; 2 unsuccessful; and 2 failures.

2007

The 2007 homepage lists:

6 successes; 2 unsuccessful; 1 unknown.


2008

The 2008 homepage isn’t kind enough to list all the projects, but it does tell us that only 7 projects were accepted by Google.

So we can work from the code.google.com page, which lists 7:

  • “C99 Parser/Pretty-Printer”; by Benedikt Huber, mentored by Iavor Diatchki

    Successful. The first try failed, but the second won through, and now people are doing things like parsing the Linux kernel with it.

  • “GMap - Fast composable maps”; by Jamie Brandon, mentored by Adrian Charles Hey

    Unsuccessful. GMap is on Hackage, but there are 0 users after 3 years.

  • “Haskell API Search”; Neil Mitchell, mentored by Niklas Broberg

    Successful. The improved performance and search capability have made it into Hoogle releases, and Hoogle is one of the more popular Haskell applications (with 1.7m web searches).

  • “Cabal ‘make-like’ dependency framework”; Andrea Vezzosi, mentored by Duncan Coutts

    Unsuccessful. (His code wound up becoming hbuild, which is not on Hackage or apparently used by anyone.)

  • “GHC plugins”; Maximilian Conroy Bolingbroke, mentored by Sean Seefried

    Unsuccessful? As of January 2010, the patch adding plugins functionality had yet to be accepted & applied; as of February 2011, the ticket remained open and the code unmerged. The code is apparently not yet bitrotten by the passage of 3 years, but how long can its luck last? The code was finally merged on 2011-08-04; the docs do not list any users.

  • “Data parallel physics engine”; Roman Cheplyaka, mentored by Manuel M. T. Chakravarty

    Unsuccessful. It seems to be finished, but I can see no mention on the engine’s blog of any actual use of the engine. (I would give reverse dependency statistics, but Hpysics seems to have never been uploaded to Hackage.)

  • “GHC API”; Thomas Schilling, mentored by Simon Marlow

    Unsuccessful. Schilling’s fixes went in, but they were in general minor changes (like adding the GHC monad) or bug-fixes; the GHC API remains a mess.

2 successful, 5 unsuccessful.

Don Stewart’s View

Don Stewart writes in reply to the foregoing:

“We explicitly pushed harder in 2008 to clarify and simplify the goals of the projects, ensure adequate prior Haskell experience and to focus on libraries and tools that directly benefit the community.

And our success rate was much higher.

So: look for things that benefit the largest number of Haskell developers and users, and from students with proven Haskell development experience. You can’t learn Haskell from zero on the job, during SoC.”


2009

5 projects were accepted this year; Darcs, which tried to apply in its own right, was rejected.

In general, these looked good. Most of them will be widely useful – especially the Darcs and Haddock SoCs – or address long-standing complaints (many criticisms of laziness revolve around how unpredictable it makes memory consumption). The only one that bothers me is the EclipseFP project. I’m not sure Eclipse is common enough among Haskellers or potential Haskellers to warrant the effort5, but at least the project is focused on improving an existing plugin rather than writing one from scratch. The 5 were:

3 successful, 1 unknown, 1 unsuccessful.

2010

7 projects were accepted:

  • Improvements to Cabal’s test support; Thomas Tuegel, mentored by Johan Tibell

    Successful? The functionality is now in a released version of cabal-install and a number of packages use the provided test syntax.6

  • Infrastructure for a more social Hackage 2.0; Matthew Gruen, mentored by Edward Kmett

    Unknown. Gruen’s blog was last updated October 2010, and Hackage still hasn’t switched over and gotten the new features & benefit of the rewrite. But the code exists and there is a running public demo, so this may yet be a success.

  • A high performance HTML generation library; Jasper Van der Jeugt, mentored by Simon Meier

    Successful. blaze-html has been released and is actively developed; version 0.4.0.0 has 50 total reverse dependencies at the time of writing, and blaze-builder has 97 reverse dependencies, though there’s much overlap. (This site is built on hakyll, which uses blaze-html.)

  • Improvements to the GHC LLVM backend; Alp Mestanogullari, mentored by Maximilian Bolingbroke

    Unsuccessful. Dan Peebles in #haskell says that Alp’s SoC never got off the ground when his computer died at the beginning of the summer; with nothing written or turned in, this can’t be considered a successful SoC, exactly. But could it have been?

    The LLVM backend is still on track to become the default GHC backend7, suggesting that it’s popular in GHC HQ (and the DDC dialect), and it seems to also be popular among Haskell bloggers. The scope is restricted to taking a working backend and optimizing it. In general, it seems like a decent SoC proposal, and better than the next one:

  • Implementing the Immix Garbage Collection Algorithm; Marco Túlio Gontijo e Silva, mentored by Simon Marlow

    Unsuccessful. The GHC repository history, as of 4 February, contains no patches adding Immix GC. Silva writes in his blog’s SoC summary that “Although the implementation is not mature enough to be included in the repository, I’m happy with the state it is now. I think it’s a good start, and I plan to keep working on it.” (His new blog, begun in August 2010, contains no mention of Immix work.) The GHC wiki says that “it’s functional, doesn’t have known bugs and gets better results than the default GC in the nofib suite. On the other hand, it gets worse results than the default GC for the nofib/gc suite.” Marco said in a Disqus comment on this page:

    Hi. I wondered about continuing my work on the Immix GC collector, but Simon Marlow, my mentor, thought it was not a good idea to invest more effort on Immix. So I dropped it, and started working on other things. Greetings.

  • “Improving Darcs Performance”; Adolfo Builes, mentored by Eric Kow

    Successful. This replaced a previous proposal to write a Haskell binding to the GObject library, which never started. Looking through the Darcs repository history, I see a number of new tests related to the global cache, but no major edits to cache-related modules. The Darcs wiki reports it as successful and as closing some bugs.

  • Improving Darcs’s network performance; Alexey Levan, mentored by Petr Rockai

    Successful. Levan divided his SoC into 2 parts, improving Darcs’s performance in fetching the many small files that make up a repository’s revision history, and writing ‘a smart server that can provide clients with only files they need in one request’. The ‘smart server’ seems to have been abandoned as not being worthwhile, but the fetching idea was implemented and will be in the 2.8 release.

    The basic idea is to combine all the small files into a single tarball which can be downloaded at full speed, and avoid the latency of many roundtrips. The 2.8 release description claims that when darcs optimize --http was used on the Darcs repository, a full download went from 40 minutes to 3 minutes. This feature would not be enabled by default, but the gain for larger repositories would be large enough that I feel comfortable classifying it as a successful SoC.
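
To make the tarball idea concrete, here is a minimal sketch in Haskell using the tar package; it is only an illustration of the approach, not Darcs’s actual implementation, and the archive name, base directory, and file list are hypothetical:

import qualified Codec.Archive.Tar as Tar

-- Pack the many small patch files into one archive, so a client needs a single
-- HTTP round-trip instead of one request per file. (Illustrative only: the
-- "_darcs" layout and file names here are placeholders.)
packRepository :: [FilePath] -> IO ()
packRepository patchFiles = Tar.create "packed-repo.tar" "_darcs" patchFiles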

Predicting 2010 Results

Borrowing from our 3 cardinal sins of SoCs, and per my usual practice of testing my understanding by making predictions, what predictions do I make about the 2010 SoCs?

Most of the 7 SoCs are laudably focused on an existing application. You don’t need to justify a speedup of normal Darcs operations because there’s an installed base of Darcs users that will benefit; a new GC for GHC or a LLVM backend will benefit every Haskeller; better Cabal support for testing may go unused by many package authors who either have no tests or don’t want to bother - but a fair number will bother, and it will get maintained as part of Cabal, and similarly for the Hackage 2.0 project.

The Immix GC strikes me as a very challenging summer project; a GC is one of the most low-level pieces of a functional language and is intertwined with all sorts of code and considerations. It would not surprise me if that project wound up just getting a little closer to a working Immix GC but not producing a production-quality GC scheduled to come to compilers near you.

2 in particular concern me as potentially falling prey to sins #2 & 3: the GObject-binder tool, and the high-performance HTML library:

  1. Let’s assume that the HTML library does wind up as being faster than existing libraries, and as useful - that compromises don’t destroy its utility. Who will use it? It will almost surely have an API different enough from existing libraries that a conversion will be painful. There are roughly 42 users of the existing xhtml-generating library; will their authors wish to embrace a cutting-edge infant library? Is HTML generation even much of a bottleneck for them? (Speaking just for Gitit, Pandoc and its HTML generation are not usually a bottleneck.)

  2. The case against the GObject project makes itself; GTK2Hs isn’t as widely used as one would expect, and this seems to be due to the difficulty of installation and its general complexity. So there are few users of existing libraries; would there be more users for those libraries no one has bothered to bind nor yet clamored for? (This project might fall afoul of sin #1, but I do not know how difficult the GObject data is to interpret.)

2010 Results

As of February 2011, I grade the 7 SoCs for 2010 as follows: 4 successes, 1 unknown, and 2 unsuccessful. (One unknown, Hackage 2.0, will probably turn out to be a success if it ever goes live as the main Hackage site; as of 2013-01-01, it has not.) As one would hope, the results seem to be better than the results for 2008 or 2009.

Of my original predictions, I think I was right about the Immix GC & GObject & Darcs optimizations, semi-right about Hackage 2.0 & Cabal testing support, somewhat wrong about the LLVM work, and completely wrong about the HTML/blaze SoC. (I am not sure why I was wrong about the last, and don’t judge myself harshly for not predicting the exogenous failure of the LLVM SoC.)

2011

Haskell.org got 7 projects again for 2011. They are:

  1. “Improve EclipseFP”; Alejandro Serrano, mentored by Thomas Schilling

    “Eclipse is one of the most popular IDEs in our days. EclipseFP is a project developing a plug-in for it that supports Haskell. Now, it has syntax highlighting, integration of GHCi and supports some properties of Cabal files. My idea is to extend the set of tools available, at least with:

    • Autocompletion and better links to documentation,

    • A way to run unit tests within Eclipse,

    • More support for editing Cabal files visually, including a browser of the available packages.”

  2. “Simplified OpenGL bindings”; Alexander Göransson, mentored by Jason Dagit

    “Modernize and simplify OpenGL bindings for Haskell. Focus on safety, shaders and simplicity.”

  3. “Interpreter Support for the Cabal-Install Build Tool”; anklesaria, by Duncan Coutts

    “This project aims to provide cabal-install with an ‘repl’ [cabal ghci?] command by adding to the Cabal API. This would allow package developers to use GHCi and Hugs from within packages requiring options and preprocessing from Cabal.”

  4. “Convert the text package to use UTF-8 internally”; Jasper Van der Jeugt, by Edward Kmett (detailed proposal)

    “For Haskell projects handling Unicode text, the text library offers both speed and simplicity-of-use. When it was written, benchmarks indicated that UTF-16 would be a good choice for the internal encoding in the library. However, these (rather artificial) benchmarks did not take into account the time taken to

    1. decode the ‘Real World’ data and

    2. encode it to write it back.

    I propose to

    1. benchmark and

    2. convert the library to UTF-8 if it is a faster choice for ‘Real World’-applications.” (A sketch of the sort of round-trip benchmark meant here appears after this list.)

  5. “Build multiple Cabal packages in parallel”; Mikhail Glushenkov, by Johan Tibell

    “Cabal is a system for building and packaging Haskell libraries and programs. This project’s aim is to augment Cabal with support for building packages in parallel. Many developers have multi-core machines, but Cabal runs the build process in a single thread, only making use of one core. If the build process could be parallelized, build times could be cut by perhaps a factor of 2-8, depending on the number of cores and opportunity of parallel execution available.”

  6. “Darcs Bridge”; Owen Stephens, by Ganesh Sittampalam

    “My proposed project is to create a generic bridge that will enable easy interoperability and synchronisation between Darcs and other VCSs. The bridge will be designed to be generic, but the focus of this project will be Darcs2 ↔︎ Git and Darcs2 ↔︎ Darcs1. The bridge should allow loss-less, correct conversion to and from Darcs repositories, allowing users to use the tool that suits them and their project best, be that Darcs as it currently exists, or another tool.”

  7. “Darcs: primitive patches version 3” (expanded blog description); Petr Ročkai, by Eric Kow

    “Darcs, a revision control system, uses so-called patches to represent changes to individual version-controlled files, where the ‘primitive’ patches are the lowest level of this representation, capturing notions like ‘hunks’ (akin to what diff(1) produces), token replace and file and directory addition/removal. I propose to implement a different representation of these primitive patches, hoping to improve both performance and flexibility of darcs and to facilitate future development.”
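
To make the benchmarking in proposal #4 concrete: the ‘Real World’ measurement it describes times the decode/encode round-trip, rather than operations on already-decoded Text. Below is a minimal sketch using the criterion library (the input file name is a stand-in, not something from the proposal):

import Criterion.Main (bench, defaultMain, nf, whnf)
import qualified Data.ByteString as B
import qualified Data.Text.Encoding as TE

main :: IO ()
main = do
  bytes <- B.readFile "sample-utf8.txt"  -- hypothetical real-world corpus
  defaultMain
    [ bench "decode UTF-8"               (nf TE.decodeUtf8 bytes)
    , bench "decode + encode round trip" (whnf (TE.encodeUtf8 . TE.decodeUtf8) bytes)
    ]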

Predicting 2011 Results

Which seem like good selections for SoC, and which seem less appropriate?

  1. #1 is the second EclipseFP SoC, after a failed 2009 attempt; why should we think this one will do better?

  2. With #2, the fear is that the result will not be used; there is an OpenGL binding already, after all, and I haven’t heard that there are very many people who want to do OpenGL graphics but were deterred by complexity or danger in it.

  3. cabal ghci is a long-requested Cabal feature, and it sounds as if all the groundwork and experimentation has been done. I have no problem with this one.

  4. Benchmarking sounds quite doable, and text is increasingly used; but if I had to criticize it, I would criticize it for underambition, for sounding too modest and not a good use of a slot.

  5. #5 is a second crack at the parallel compilation problem (building on a 2008 SoC) and is troubling in the same way the EclipseFP SoC is.

  6. There are multiple existing Darcs->other VCS programs, so the task is quite doable. An escape hatch would be very valuable for users (even if rarely used).

  7. This one sounds tremendously speculative to me.

    I respect Ročkai & Kow, but from idling on #darcs and reading the occasional Darcs-related emails & Reddit posts, I don’t know of any fully worked out design for such a new patch representation, which makes it a challenging theoretical problem (patch theory being general & powerful), a major implementation issue (since the existing primitive patches are assumed throughout the Darcs codebase), and difficult to verify that it will not backfire on users or legacy repositories. All in all, #7 sounds like the sort of project where the best case scenario is a repository branch/fork somewhere that few besides the author understand, which is better on some usecases and worse on others, but not actually in general use. That might be a success by the Darcs team’s lights, but not in the sense I have been using in this history.

To summarize my feelings:

  • #1 seems a bit doubtful but is more likely to succeed (because presumably most of the heavy lifting was done previously).

  • I predict #2 & #7 will likely fail.

  • I would be mildly surprised if both #3 & #5 succeed - since they’re challenging and long-requested Cabal features - but I expect at least one of them to succeed. Which, I am not sure.

  • I expect with confidence that #4 & #6 will succeed.

2011 Results

  1. “Improve EclipseFP”; Alejandro Serrano, mentored by Thomas Schilling

    Successful. The coding was finished, to the author’s apparent satisfaction, and the work was included in the 2.1.0 release.

  2. “Simplified OpenGL bindings”; Alexander Göransson, mentored by Jason Dagit

    Unsuccessful. Jason Dagit says Alexander never started for unknown personal reasons and so no work was ever done (no OpenGLRawNice library exists, a post-August 2011 Google search for “Alexander Göransson OpenGL” is dry, nothing on Hackage seems to mention OpenGL 4.0 support, etc.).

  3. “Interpreter Support for the Cabal-Install Build Tool”; anklesaria, by Duncan Coutts

    Unsuccessful? anklesaria’s final post, “Ending GSoC”, says the work is done and provides a repository with patches by amsay@amsay.net - but no patches by that email appear in the Cabal repository as of 2011-12-10; nor does there appear to be any discussion in the cabal-dev ML archives.

  4. “Convert the text package to use UTF-8 internally”; Jasper Van der Jeugt, by Edward Kmett

    Successful. Jasper published 2 posts on benchmarking the converted text against the original (“Text/UTF-8: Initial results” & “Text/UTF-8: Studying memory usage”) and discussed the results in “Text/UTF-8: Aftermath”; the upshot is that the conversion has a real but small advantage, potentially would cause interoperability problems, requires considerable testing, and won’t be merged in (the fork will be maintained against hopes of future GHC optimizations). Jasper says the benefits wound up being a bigger & cleaner test/benchmark suite, and some optimizations made for the UTF-8 version can be applied to the original. Since Edward Kmett seems pleased, I have marked it a success (although I remain dubious about whether it was a good SoC).

  5. “Build multiple Cabal packages in parallel”; Mikhail Glushenkov, by Johan Tibell

    Successful. Glushenkov reported in “Parallelising cabal-install: Results” that the patches were done and people could play with his repository; the comments report that it works and does offer speedups. However, as before, no patch by him appears in the mainline Cabal, and the last discussion was 2011-11-06, where he provides a patch bundle. No one commented; Mikhail says the patches may be “too invasive” and need reworking before merging.8 The code was ultimately released as part of cabal-install 1.16 and is reportedly working well.

  6. “Darcs Bridge”; Owen Stephens, by Ganesh Sittampalam

    Successful? Owen’s GSoC blog posts conclude with “GSoC: Darcs Bridge - Results” summarizing the final features: he succeeded in most of the functionality. Brent Yorgey tells me that he has successfully used the tool to convert repositories to put onto Github, but says there are “some critical bugs” and use is still “clunky” (eg. currently requiring Darcs HEAD; see the usage guide on the Darcs wiki). Whether the bugs will be fixed and the package polished to the point where it will be widely used remains to be seen.

  7. “Darcs: primitive patches version 3”; Petr Ročkai, by Eric Kow

    Unsuccessful. Ročkai wrote two posts (“soc reloaded: progress 1” & “soc reloaded: Outcomes”). This seems to have turned out as I predicted above:

    “Since my last report, I have decided to turn somewhat more radical again. The original plan was to stick with the darcs codebase and do most (all) of the work within that, based primarily on writing tests for the testsuite and not exposing anything of the new functionality in an user-visible fashion. I changed my mind about this. The main reason was that the test environment, as it is, makes certain properties hard to express: a typical test-suite works with assertions (HUnit) and invariants (QuickCheck). In this environment, expressing ideas like ‘the displayed patches are aesthetically pleasing’ or ‘the files in the repository have reasonable shape’ is impractical at best. An alternative would have been to make myself a playground using the darcs library to expose the new code. But the fact is, our current codebase is entrenched in all kinds of legacy issues, like handling filenames and duplicated code. It makes the experimenter’s life harder than necessary, and it also involves rebuilding a whole lot of code that I never use, over and over. All in all, I made a somewhat bold decision to cut everything that lived under Darcs.Patch (plus a few dependencies, as few as possible) into a new library, which I named patchlib, in the best tradition of cmdlib, pathlib and fslib. At that point, I also removed custom file path handling from that portion of code, removed the use of a custom Printer (a pretty-printer implementation) module and made a few other incompatible changes.”

    The remaining work?

    “The obvious future work lies in the conflict handling. There are two main options in this regard: either re-engineer a patch-level, commute-based representation of conflicts (in the spirit of mergers and conflictors), as V3 ‘composite’ patches, or alternatively, use a non-patch based mechanism for tracking conflicts and resolutions. It’s still somewhat early to decide which is a better choice, and they come with different trade-offs. Nevertheless, the decision, and the implementation, constitute a major step towards darcs 3. The other major piece of work that remains is the repository format: in this area, I have done some research in both the previous and this year’s project, but there are no definitive answers, even less an implementation. I think we now have a number of good ideas on how to approach this. We do need to sort out a few issues though, and the decision on the conflict layer also influences the shape of the repository.

    Each of these two open problems is probably about the size of an ambitious SoC project. On top of that, a lot of integration work needs to happen to actually make real use of the advancements. We shall see how much time and resources can be found for advancing this cause, but I am relatively optimistic: the primitive level has turned out fairly well, and to me it seems that shedding the shackles of legacy code sprawl can boost the project as a whole significantly forward.”

    As I wrote before, the Darcs team will disagree with my assessment, but I believe marking it ‘Unsuccessful’ is most consistent with how all previous SoCs have been judged9.

So of the 7 2011 SoCs:

  • 3 were unsuccessful (2 possibly not)

  • 4 were successful (1 possibly not)

My predictions were in general accurate; I remained hopeful that at least one of the Cabal SoCs would be merged in, which would give me a clean sweep and also render the final 2011 SoC record as good as the 2010 SoC record. (The parallel build was eventually merged in during 2012.)

It troubles me that the Cabal SoCs took so long to be merged in (if at all), in line with the historical trend for big Cabal SoC improvements to be partially done but never go into production. Duncan Coutts says they are in the queue, but if neither gets merged in before the 2012 SoC starts, the lesson seems to be that Cabal is too dangerous and uncertain to waste SoCs on.

2012

In 2012, Haskell.org was bumped to 8 slots:

  1. “Patch Index Optimization for Darcs” (proposal); BSRK Aditya, mentored by Eric Kow

    The goal of this project is to speed up the darcs changes and darcs annotate commands using a cache called “patch index”. The slow speed of these commands is one of the major user grievances in darcs. Patch-Index data structures can quickly identify the patches that modified a given file.

  2. “Scoutess - a build manager for cabal projects”; DMcGill, mentored by Alp Mestanogullari

    Scoutess is a tool for package maintainers and automates a lot of the hassle of dealing with dependencies and multiple versions of libraries. It will create a sandboxed environment simulating a fresh Haskell Platform install, attempt to build your project using Cabal and highlight any problems while also tracking changes or updates to dependencies located in remote repositories so these can be tested against as well.

  3. “Implement Concurrent Hash-table / Hash map” (proposal); Loren Davis, mentored by Ryan Newton

    Concurrent data structures for Haskell are currently a work in progress, and are necessary for parallel and high-performance computing. A few data structures, such as wait-free lists, already have Haskell implementations. One that does not yet exist is a thread-safe hash table. I propose to implement one as a library available under the new BSD license.

  4. “Accelerating Haskell Application Development”; mdittmer, mentored by Michael Snoyman

    A project for improving performance of “lively developer mode” environments that require fast rebuild-and-redeploy routines.

  5. “Sandboxed builds and isolated environments for Cabal”; Mikhail Glushenkov, mentored by Johan Tibell

    The aim of this project is to integrate support for sandboxed builds into Cabal, a system for building Haskell projects. There are several different third-party implementations of this functionality already available, but it would be beneficial (from the standpoint of ease of use and of focusing community efforts) to have a unified and polished solution integrated into Cabal itself. Additionally, this project is a step in the direction of solving the infamous “dependency hell” problem of Cabal.

  6. “Enable GHC to use multiple instances of a package for compilation” (proposal); Philipp Schuster, mentored by Andres Löh

    People are running into dependency hell when installing Haskell packages. I want to help move in the direction of solving it.

  7. “multiuser browser-based interactive ghci, hpaste.org meets tryhaskell.org, for improved teaching of those new to Haskell.” (proposal); Shae Erisson, mentored by Heinrich Apfelmus

    Many new users learn Haskell from the #haskell IRC channel. lambdabot’s mueval is good for interactive teaching, but only allows short code snippets. hpaste allows large snippets to be shared, but not into an interactive ghci. Chris Done’s Try Haskell! allows larger snippets to be loaded, but is not explicitly multi-user. If Try Haskell! allowed multiple users to view the same interpreter state, and allowed users to paste in new code, teaching and debugging would be much easier for people new to Haskell.

  8. “Haskell-Type-Exts”; Shayan Najd, mentored by Niklas Broberg

    Following the proposal by Niklas Broberg [0], I am highly eager to expand the existing typechecker [1] for Haskell-Src-Exts [2] to support most of the features available in Haskell2010 with the major extensions like GADTs, RankNTypes and Type-Functions. It is done by following the guidelines of “Typing Haskell in Haskell” [3] as the basis; adding support for RankNTypes [5]; and then introducing GADTs and Type-Functions by local assumptions [4].

    [0] https://web.archive.org/web/20060517222319/http://hackage.haskell.org/trac/summer-of-code/ticket/1620
    [1] https://hackage.haskell.org/package/haskell-src-exts
    [2] https://hackage.haskell.org/package/haskell-type-exts
    [3] M. P. Jones. Typing Haskell in Haskell
    [4] D. Vytiniotis, S. Peyton Jones, T. Schrijvers, M. Sulzmann. OutsideIn(X) - Modular type inference with local assumptions

2012 Predictions

Instead of qualitative predictions this year, I will record probabilities on PredictionBook.com; all predictions assume delivery, but some need different judgment criteria.

  1. 80%

    According to the proposal, the core patch index code has already been implemented & benchmarked by the student, who has worked on Darcs before (flying 9 hours into England for a hacking meetup). The rest of the work sounds reasonable, and the project is not overreaching at all. I fully expect this to work out (even if darcs annotate is not a command I use every month, much less day). The main risk seems to be life events, but SoCs failing due to personal issues are relatively rare and affect <20% of past projects.

    This SoC will be judged successful if it is in Darcs HEAD, or at least scheduled for application, by my usual deadline: 2013-01-01.

  2. 40%

    No proposal was publicly available. I am not familiar with DMcGill, and Googling for Haskell material related to “McGill”, I don’t see any past work. It sounds relatively ambitious in the short abstract - replicating cabal-dev and adding in considerable other functionality? I’ve previously noticed that Cabal-related SoCs seem to be unusually blighted. Adding that all up, I am left with dubious feelings.

    Judgment: tools are always hard to judge. This one will use the usual subjective “is it being used by a good fraction of its potential userbase?” criterion.

  3. 60%

    The student has completed a SoC before, and is a graduate student in an AI/machine learning program; both of which bode well for him completing another SoC. I’m not actually sure how many Haskell applications need a concurrent hashtable - the existing hashtables package has 3 users, and base’s “Data.HashTable” module is used by perhaps 10-20 code repositories (judging from grepping through my local archive).

    It is unreasonable to expect it to supersede base, which has had something like a decade to gain users, but equaling the obscure hashtables package seems reasonable. Judgment will be whether there are >=3 reverse dependencies.

  4. 40%

    As stated, I have no idea what this SoC is about. I don’t know the student, although Snoyman seems to write a great deal of code and successful code at that, which is a good sign - if he agreed to mentor it, surely the idea can’t be that bad?

    Since I don’t know what it is, I cannot specify a judgment criterion in advance.

  5. 75%

    Both student & mentor are experienced Haskell hackers, and have worked with the Cabal codebase. As the abstract says, sandboxed builds are not a novel feature. cabal-dev is popular among developers, so it stands to reason that a polished version inside Cabal itself would be even more popular. I see little reason this could not be successful, aside from the general challenge of working with Cabal.

    Judgment: sandboxed build functionality in Cabal HEAD or scheduled to be applied soon.

  6. 65%

    Judgment similar to above: patches scheduled for GHC HEAD or already applied.

  7. 80%

    Shae is an experienced Haskeller & professional developer (to the extent I was very surprised to hear that he had applied). The proposal seems like a very reasonable addition, and I do not think it is too difficult to modify the mueval codebase10.

    Judgment: whether multi-user sessions have gone live.

  8. 55%

    Here again I regret the absence of a public proposal. I’m not sure how useful this one is, how hard it is, or how much progress the prototype library on Hackage represents, nor do I know any comparable libraries I could check for a reverse dependency count. I don’t know the student, but Broberg is a capable Haskeller.

    Judgment criteria: punting to checking for >=3 reverse-dependencies/users.

2012 Results

As of 2013-01-01:

  1. Darcs patch index

    Merged into Darcs HEAD without apparent issue (documentation). Project was successful.

  2. scoutess: As of August 15, McGillicuddy was reporting that scoutess was complete (repository). In Haskell-cafe, there is one off-hand mention of scoutess by someone using a different continuous integration program; there are a few discussions on Reddit of progress but the most recent post is a theoretical discussion of scoutess’s architecture. There are no tools or libraries depending on it in Hackage because scoutess has never been uploaded to Hackage. Indeed, as far as I can tell, no one is actually using it, and stepcut agreed with this assessment when I asked him.

    I specified in April 2012 that my judgment criterion would be “is scoutess being used by a good fraction of its potential userbase?”; in this light, scoutess was unsuccessful.

  3. concurrent hashtable/hashmap

    Edward Kmett tells me that Loren ran into personal issues and was removed from SoC by the midpoint with no delivered library. Unsuccessful.

  4. “Accelerating Haskell Application Development”

    Edward Kmett tells me that the student left for a job around the midpoint and was removed from SoC at the last milestone. Unsuccessful. eegreg argues that while incomplete, the first goal of the SoC (a file-watching library) has since been fulfilled and the library has been put to use by the Yesod ecosystem of Web libraries & applications.

  5. Sandboxed builds

    Completed and in Cabal HEAD; per my criteria, this is successful.

  6. Multiple packages support in GHC

    The latest information I can find is a GHC documentation page which summarizes the material as: “It is possible to install multiple instances of the same package version with my forks of cabal and ghc. Quite a few problems remain.” A set of slides says “Quite a few problems remain therefore nothing is merged yet.” The code is not in HEAD for either Cabal or GHC, and given the many problems, may never be. Unsuccessful.

  7. Better tryhaskell.org

    Erisson finished in August with a Hackage upload and some nice slides. Unfortunately, there is no live server where one can actually use ghcLiVE; someone suggested that Erisson might’ve given up on the sandboxing aspects which would have made it usable on the public Internet (per his original proposal). One wonders how many people will ever use it, given how much Haskell instruction is done remotely, but maybe it would be useful in offline university classes. In any case, my criterion was clear: “whether multi-user sessions have gone live”; and so despite my high hopes, I must mark this unsuccessful. (Edward Kmett disagrees with this assessment; Erisson sort of agrees and disagrees11.)

  8. Haskell type-checker library

    This one is a little confusing. The Hackage library remains untouched since April 2012, although there is a largely complete library (the main missing feature is records support, which is important but not a huge gap) available on Github. Another blog post implies that it is but a small part of a grander research scheme, and that my reverse-dependencies judgment criterion is simply off-base, although it suggests the SoC was unsuccessful. (The alternative, looking at whether it is pushed to the HEAD of haskell-type-exts, would also suggest unsuccessful.) I am not sure whether this should be considered successful or unsuccessful.

To summarize:

  1. unclear: 1

  2. successful: 2

  3. unsuccessful: 5

2 of the 5 unsuccessful projects were due to problems on the student’s end (hashtable, “accelerating”); 2 were too ambitious in general (scoutess, multiple-packages); and the last 1 was not too ambitious but in my opinion was left somewhat incomplete (ghcLiVE).

How successful were my predictions? Employing a proper scoring rule (log scoring; for additional discussion of scoring rules, see 2012 election predictions) and comparing against a 50-50 random guesser where >0 means I outperformed the random guesser12:

logBinaryScore = sum . map (\(p,result) -> if result then 1 + logBase 2 p else 1 + logBase 2 (1-p))
logBinaryScore [(0.80, True), (0.40, False), (0.60, False), (0.40, False),
                (0.75, True), (0.65, False), (0.80, False)]

-0.3693261451031018

I performed worse than random, in part because 2012 was such a bad year. In particular, I placed great weight on Erisson succeeding (without that prediction, I would score 0.95). In retrospect, I am also disappointed that I assigned the GHC project as high as 65% when I knew GHC projects are as dangerous as Cabal projects and the multiple-packages work involved a lot of low-level problems with minimal prior work to build on.
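
As a check on that 0.95 figure, dropping the Erisson prediction (0.80, False) from the list above and re-running the same scoring function gives:

logBinaryScore [(0.80, True), (0.40, False), (0.60, False), (0.40, False),
                (0.75, True), (0.65, False)]
-- ≈ 0.9526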

2013

For 2013, Haskell.org/Darcs picked up a full 11 slots:

  1. “Enhancing Darcsden”; BSRK Aditya, mentored by Ganesh Sittampalam

    The goal of this project is to increase the functionality of Darcsden. Darcsden is an open source repository hosting platform for darcs, written in Haskell. The main features are authentication from Github/OpenID, Password Recovery, Editing repository files online, and Comparison between a repository and its forks.

  2. “Overloaded record fields for GHC”; Adam Gundry, mentored by Simon Peyton Jones

    “Haskell’s record system lacks support for overloaded field names. This leads to unnecessarily cluttered code and inhibits code reuse. I propose to implement support for overloaded field names and polymorphic record projection as a GHC extension, with the aim to ultimately add them to the language standard in a future revision. This relatively straightforward change would remove a significant source of frustration for Haskell programmers….By September 23 I will have the final implementation on a GHC branch ready to merge into HEAD…The biggest risk to the project is that it may prove controversial, as has been seen by arguments over previous attempts to solve this problem. However, I am optimistic that sufficient consensus can be reached, as much of the disagreement is about syntax or implementation details rather than the user-visible aspects of the extension.” (A minimal illustration of the field-name clash in question appears after this list.)

  3. “interactive-diagrams and a paste site with the ability for dynamic rendering of diagrams”; Dan Frumin, mentored by Luite Stegeman

    I want to build an active-diagrams library for compiling diagrams code into active HTML + JS widgets. The diagrams are active in the sense that user can interact with them: for example, a result of type (Bool -> Diagram) should be compiled to a widget that renders a diagram depending on the state of the checkbox. In addition, a pastebin site should be built, that can be used as an interactive scratchpad, where diagrams code can be automatically compiled and the graphical output shown along. This is useful for sharing graphical experiments, teaching beginners and so on. The rendered diagram, together with its interactive capabilities, should be easily embedded in third-party blogs, websites.

  4. “Haddock extension for Pandoc compatibility”; Fūzetsu [Mateusz Kowalczyk?], mentored by Simon Hengel (further discussion)

    Project aiming to extend Haddock to a point where writing reader and writer modules for Pandoc is possible. The general goal is to allow for documentation writing in different formats, including the ever popular Markdown.

  5. “Port Charts to use Diagrams”; jbracker, mentored by Tim Docker

    Right now the Charts library uses Cairo as its backend. Cairo can be difficult to build on platforms other than Linux. The goal of the project is to make the Charts library independent of its backend, so that it can use Diagrams as a backend. Diagrams supports a variety of backends (SVG, Postscript, Cairo (optional!), Tikz, Gtk), some of which are written platform-independently. As Diagrams is easy to install on every platform supported by Haskell, so will be a Charts library built upon Diagrams. This will make it easy for Haskellers to add charting capabilities to their own applications and libraries in a manner portable across all platforms without any pain.

  6. “Better record command for darcs”; José Neder, mentored by Guillaume Hoffmann

    The objective of this project is to improve the darcs record command by implementing several options…Diffing two given files can produce various correct outputs, depending on the algorithm used. The standard diff algorithm (used in darcs and many other places) has been criticized for sometimes producing counterintuitive diffs. We would like to try out the “patience diff” algorithm (Issue 346), which seems to produce more interesting chunks when used on source code. One downside of patience diff is that it may be slower than classic diff, so performance will have to be evaluated.

    Moreover, just as the existing flag --look-for-adds proposes adding unversioned files to the new patch, we could use a --look-for-moves flag, which would be handy when one wants to record a file move after having done the move, i.e., without using darcs move (Issue 642). Another cool flag would be --look-for-replaces, which would detect token renaming when one forgets about using darcs replace (Issue 2209).

    If time allows, a --provide-primitive-patches would be useful for darcs to be called by another program that provides the changes to record. For instance, a web interface providing a simple on-line code edit feature a la GitHub.

  7. “Communicating with mobile devices”; Marcos Pividori, mentored by Michael Snoyman (further discussion)

    The aim of this project is to develop a server-side library in Haskell for sending push notifications to devices running different OSes, such as Android, iOS, Windows Phone, BlackBerry, and so on. And, once we have these libraries, investigate the possibility of maintaining a “back and forth” communication between a server and mobile devices and develop a library to handle this.

  8. “Improve the feedback of the cabal-install dependency solver”; Martin, mentored by Andres Löh

    The dependency solver can be a mysterious piece of the installation process. I want to give the user the possibility to see what is happening and also give them a better chance to understand what happened if something does not just work. Further, I would like to enable them to use that information to fix the installation in some cases.

  9. “Parallelise cabal build”; Mikhail Glushenkov, mentored by Johan Tibell

    This project aims to add support for module-level parallel builds to Cabal, a system for building and packaging Haskell libraries and programs. I suggest a novel solution for this problem in the context of GHC/Cabal that, unlike existing approaches, will make a more effective use of multi-core CPUs.

  10. “Extending GHC to support building modules in parallel”; Patrick Palka, mentored by Thomas Schilling

    The aim of this project is to implement native support for building modules in parallel in GHC. This entails making the compilation pipeline thread-safe, and writing a parallel compilation driver next to the existing sequential driver. Focus will be placed on correctness and deterministic output, with speed a latent concern. Nothing on the user’s end should change other than there being a new -j flag that specifies the number of modules to build in parallel. The cabal-install project should be augmented with the ability to use this new -j flag to speed up builds, alongside its existing package-level parallelization.

  11. “Haskell Qt Binding Generator”; Zhengliang Feng, mentored by Carter Schonwald & Ian-Woo Kim

    This project aims to provide a generation tool that creates Qt bindings for Haskell automatically and makes the generation as automated as possible. It parses Qt header files and generates corresponding Qt-style Haskell interfaces (type classes, data types).

    The mentor clarified in August 2013:

    Speaking as the principal mentor on the relevant GSOC project (Ian is also co-mentoring, so he can correct me if I’m wrong anywhere): The QT binding project is a subproject of the GSoC project whose principal focus for the summer is making a swig + fficxx based C++ FFI wrapper for haskell. The GSoC core project is moving apace, and will likely be usable by the close of the summer. Realistically, the QT bit is gold plating / a nontrivial demo use case. Currently it’s unclear if we’ll have time to get the QT ball rolling, but a lot of the support tooling for that subproject should be done by the close of the summer!
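
For a concrete picture of the problem behind proposal #2 above (overloaded record fields), the classic clash looks like this; a minimal illustration of the status quo, not code from the SoC:

-- Each record declaration defines a top-level selector function `name`, so
-- without an overloaded-record-fields extension these two cannot coexist in
-- one module: GHC rejects the second with a "Multiple declarations" error.
data Person  = Person  { name :: String, age :: Int }
data Company = Company { name :: String, owner :: Person }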

2013 Predictions

Compiling the 2006-2012 results, I get an overall base-rate of 47% successful, 40% unsuccessful, & 13% unknown projects. With this base rate in mind, I predicted thusly:

  1. Darcsden: 65%

  2. Overloaded records: 40%

  3. interactive-diagrams: 35%

  4. Pandoc Haddock: 55%

  5. Charts on Diagrams: 75%

  6. darcs record: 65%

  7. mobile push: 25%

  8. dependency-solver error-messages: 40%

  9. Parallel build: 33%

  10. Parallel modules: 33%

  11. Qt binding generator: 40%

2013 Results

As of 2014-01-08:

  1. Darcsden: checking the Darcsden HEAD repository, I see patches by BSRK Aditya adding what looks like GitHub support, a repository comparison feature, resetting passwords, and file editing. I haven’t actually verified that the additions are usable, but I have no reason to be that suspicious and so I will mark this SoC successful.

  2. Overloaded record fields for GHC: Gundry finished work on the extension apparently successfully, and submitted his patches in September 2013. The ghc-devs mailing list discussion petered out, though, and in November 2013 Gundry explained:

    Unfortunately, the extension will not be ready for GHC 7.8, to allow time for the design to settle and the codebase changes to mature. However, it should land in HEAD soon after the 7.8 release is cut, so the adventurous are encouraged to build GHC and try it out. Feedback from users will let us polish the design before it is finally released in 7.10… Keep an eye out for the extension to land in GHC HEAD later this year, and please try it out and give your feedback!

    However, GHC 7.8 apparently took longer to release than expected and so Gundry’s work still has not been merged by 2014. Duncan Coutts and Gundry both seem optimistic that it would get added eventually, but nevertheless, 4 months after being finished, it had not been merged into GHC HEAD and fails my original judgment criterion, rendering it unsuccessful? This is definitely a SoC to revisit in the future to check out whether it ultimately got merged in or was abandoned to bitrot.

  3. Interactive diagrams: Frumin’s summation post links to his working pastebin (paste.hskll.org, which went down some years afterwards). I am surprised, but it’s definitely there. So, marking this one successful.

  4. Haddock extension for Pandoc compatibility: this SoC was turned into a rather different project; it still involved Haddock, but not Pandoc. Unclear what happened - most of the submitted patches do not seem to have been applied to the Github repository for Haddock. Fuuzetsu tells me in an email on 2014-01-08 that the GHC integration had trouble and this delayed incorporation, but that he was actively working on getting the patches in & was optimistic that they would go in “very soon” and “at worst before the end of the week”. So this is a very similar situation to Gundry’s overloaded record field patches: the work has not been merged into GHC HEAD (technically failing my criterion of “merged by 2014”), but the student is confident that it will be soon, and apparently it was merged on 2014-01-12 (so I will probably revise the judgment in the future). Unsuccessful?

  5. Port Charts to use Diagrams: the latest version recommends using the Diagrams backend, and Bracker declared on diagrams-discuss the project “a full success”. Marking successful.

  6. “Better record command for darcs”: a check of the Darcs HEAD & issue 346 indicates that patience diff made it in, and likewise --look-for-moves, so per my criterion of 2 out of 3, marking successful.

  7. “Communicating with mobile devices”: I specified in my original prediction the usual default criterion of looking for ≥n reverse dependencies, in this case, ≥3. The author’s blog & Github repository indicate he released a trio of packages; checking the reverse dependencies for push-notify/push-notify-ccs/, I see 0 reverse dependencies apart from each other (and Hackage indicates 15 total downloads of push-notify as of 2014-01-13, suggesting that there’s not a lot of demand we’re missing by looking at reverse dependencies). Marking unsuccessful.

  8. “Improve the feedback of the cabal-install dependency solver”: Ruderer’s code appears to have not been merged into Cabal HEAD as of 2014-01-13 and there’s nothing on his blog or the cabal-dev mailing list indicating any work towards getting the code merged in. Unsuccessful.

  9. “Parallelise cabal build”: judging from the bug report discussion, merging has gotten bogged down in discussion over pull #1572/bug #1529. Hopefully the work will get merged in, but it has not yet, so this is another unsuccessful?

  10. “Extending GHC to support building modules in parallel”: bug #910 was closed after the patches were merged in. Looks like parallelizing GHC was finally done and I can mark this successful!

  11. “Haskell Qt Binding Generator”: a little confusing, but Feng’s final submission is a tarball providing some source code and a Cabal file naming it to be fficxx-swig; which is not on Hackage at all nor does it have any reverse dependencies. I thought perhaps it was intended to be merged into fficxx (written by Feng’s co-mentor Kim), but a check of the fficxx contributors shows only Ian-Woo Kim as author. A google of “fficxx-swig” turned up something more useful, a hqt repository by ofan/Ryan Feng, which is described as “hqt is a set of tools that generate bindings from Qt library to Haskell automatically. The goal of this project is to make an usable and stable tool that can generate bindings for general C++ libraries.” Perhaps this is what I should be looking at? But hqt turns out to also not be on Hackage, not have any reverse dependencies, and not even be mentioned in a google of hqt site:haskell.org. (A google of haskell "hqt" did turn up an interesting result, though: “hqt - Haskell Quran Toolkit – A Haskell library for dealing with Quran texts.” How about that?) Given the complete absence of evidence that anyone is using hqt in any way, I’m going to have to mark this one unsuccessful.

The quick overview is 5 successful, 6 unsuccessful, so very similar to historical base-rates. Things get a little rosier when I note that 3 of the unsuccessful and 0 of the successful had to be given question marks because the project seemed to succeed but it was still uncertain whether the work would be integrated with HEAD, so if all 3 get merged, then we’d really have 8 successes and 3 failures, which seems like a reasonable rate.

How did my predictions do? The simplest scoring is simply to ask how often I assigned >50% to a success or <50% to a failure; taking the question-marked verdicts at face value, I got 8/11. More sophisticated is a log scoring rule taking into account my confidence, and comparing it to a random guesser of 47% (the rough base-rate for success in past years):

let logScore = sum . map (\(result,p) -> if result then log p else log (1-p))
logScore [(True,0.65), (False,0.40), (True,0.35), (False,0.55), (True,0.75), (True,0.65),
          (False,0.25), (False,0.40), (False,0.33), (True,0.33), (False,0.40)]
-6.326876860221628
let br = 0.47 in logScore [(True,br), (False,br), (True,br), (False,br), (True,br), (True,br),
                           (False,br), (False,br), (False,br), (True,br), (False,br)]
-7.5843825560059805

Since smaller is worse under a log score, I managed to beat the base-rate. What happens if I decide to flip all 3 question-marks into successes (the best-case scenario for them)?

logScore [(True,0.65), (True,0.40), (True,0.35), (True,0.55), (True,0.75), (True,0.65),
          (False,0.25), (False,0.40), (True,0.33), (True,0.33), (False,0.40)]
-7.239856330792127

I still beat the base-rate but by a trivial amount. The problem here is that while I was accurate enough in predicting the 2 Darcs-related projects and the FFI one, I gave the Cabal/GHC-related projects low probabilities, and their flipping into successes punishes my score. As is correct: I am surprised that those succeeded, and my predictions reflected my expectations. Was I wrong to be pessimistic about them and to be surprised by their success? Should I have expected them to succeed, perhaps under a theory like “third time’s the charm”? Thinking back, I don’t think so. I didn’t have any particular reason to think that this time would be different, or that a special push was going to be made, or that some critical point had been passed. I think that if one just takes enough whacks at a hard nut, it’ll crack eventually, and this year was GHC/Cabal parallelism’s time to crack.

Lessons Learned

So, what lessons can we learn from the past years of SoCs? It seems to me like there are roughly 3 groups of explanations for failure. They are:

  1. Hubris

    GuiHaskell is probably a good example; it is essentially a bare-bones IDE, from its description. It is expecting a bit much of a single student in a single summer to write that!

  2. Unclear Use

    HsJudy is my example here. There are already so many arrays and array types in Haskell! What does HsJudy bring to the table that justifies an FFI dependency? Who’s going to use it? Pugs apparently did initially, but perhaps that’s just because it was there - when I looked at Pugs/HsJudy in 2007, certainly Pugs had no need of it. (The data parallel physics engine is probably another good example. Is it just a benchmark for the GHC developers? Is it intended for actual games? If the former, why is it a SoC project, and if the latter, isn’t that a little hubristic?)

  3. Lack Of Marketing

    One of the reasons Don Stewart’s bytestring library is so great is his relentless evangelizing, which convinces people to actually put in the effort to learn and use ByteStrings; eventually, by network effects, the whole Haskell community is affected & improved13. Some of these SoC projects suffered from a distinct lack of community buy-in - who used HaskellNet? Who used Hat when it was updated? Indifference can be fatal, and can defeat the point of a project. What good is a library that no one uses? These aren’t academic research projects which accomplish their task just by existing, after all; they’re supposed to be useful to real Haskellers.

Future SoC Proposals

There are 2 major collections of ideas for future SoC projects, aside from the general frustrations expressed in the annual survey:

Let’s look at the first 12 and see whether they’re good ideas, bad ideas, or indifferent.

  1. port GHC to the ARM architecture: It would be a good thing if we could easily compile our Haskell programs for ARM, which is used in many cellphones, but an even better idea would be using the LLVM backend to cross-compile. It would be somewhat tricky, but LLVM already has fairly solid cross-compilation support, and making GHC able to use it seems like a reasonable project for a student to tackle.

  2. “Implement overlap and exhaustiveness checking for pattern matching”: this seems both quite challenging and of specialized use. I rarely use GADTs, and I suspect that those who do write GADT code rarely make overlap or omission errors.

  3. Incremental garbage collection: this may be a good idea, depending on how much of the code has already been written. But I fear that this would go the way of the Immix GC SoC, and so it would be a bad idea.

  4. “ThreadScope with custom probes”: I don’t understand the description and can’t judge it.

  5. “A simple, sane, comprehensive Date/Time API”: having puzzled over date-time libraries before, I’m all for this one! It’s a well-defined problem, within the scope of a summer, and meets a need. Its only problem is that it doesn’t sound sexy or cool.

  6. “Combine Threadscope with Heap Profiling Tools”: Uncertain. Going by the Arch download statistics, ThreadScope is downloaded more often than one would expect, so perhaps integration would be useful.

  7. “Haddock with embedded wiki feature, a la RWH, so we can collaborate on improving the documentation”: This is a bad idea mostly because there are so many diverging ideas and possible implementations - it’s just not clear what one would do. Is it some sort of Haddock server? A Gitit wiki with clever hooks? Some lightweight in-browser editor combined with Darcs?

  8. “HTTP Library Replacement”: A good idea, assuming the linked attempts and alternate libraries haven’t already solved the issue.

  9. “Using Type Inference to Highlight Code Properly”: The difficult part is accessing the type information of an identifier inside a GHCi session - a problem probably already solved by scion. Colorizing the display of a snippet is trivial. So this would make a bad SoC.

  10. “Transformation and Optimization Tool”: This initially sounds attractive, but previous refactoring tools have been ignored. The tools that have gotten uptake are things like GHC’s -Wall (which warns about possible semantic issues) and hlint (which warns about style issues and redundancy with standard library functions) - not like Hera.

  11. “Webkit-based browser written in Haskell, similar in [plugin] architecture to Xmonad”: This is probably the worst single idea in the whole bunch. A web browser these days is an entire operating system, but worse, one in which one must supply and maintain the userland as well; it is a thankless task that will not benefit the Haskell community (except incidentally, through supporting libraries), nor one the community is uniquely equipped for. It is an infinite time sink - the only thing worse than this SoC failing would be it succeeding!

  12. “Add NVIDIA CUDA backend for Data Parallel Haskell”: DPH is rarely used; a CUDA backend would be used even more rarely; CUDA has a reputation for being difficult to coax performance out of; and the difficulties would likely be exacerbated by the usual Haskell issues with space usage & laziness. (DPH/CUDA use unboxed strict data, but there are interface issues with the rest of the boxed, lazy Haskell universe.) All in all, there are better SoCs14.

See Also

It’s difficult to quantify how ‘useful’ a package is; it’s easier to punt and ask instead how ‘popular’ it is. There are a few different sources we can appeal to:

  1. Package downloads:

    1. Don Stewart provided, for Arch Linux, Arch download numbers (defunct)

    2. The Debian (and Ubuntu) Popularity Contest offers limited popularity data; eg. xmonad

    3. some 2006-2009 Hackage statistics are available by month & ranking (defunct); live Hackage statistics are the subject of an open bug report which will be closed by Matthew Gruen’s Hackage 2.0 (2010 SoC)

  2. Reverse dependencies can be examined several ways:

    1. Hackage Dependency Monitor

    2. cabal-query

    3. HackageOneFive

  3. Searching for mentions, blog posts, and unreleased packages elsewhere; key sites to search include:

    1. Haskell subreddit

    2. Github

    3. Google Code

    4. haskell.org, and specifically the Haskell wiki & mailing lists


  1. The Haskell ecosystem evolves fast, and strong static typing means that a package can quickly cease to be compilable if not maintained.↩︎

  2. From the 2011-02-11 Haskell-cafe thread, “Haskell Summers of Code retrospective (updated for 2010)”:

    There was some discussion of this on Reddit. Below is a slightly cleaned-up version of my comments there.

    I really appreciate this roundup. But I think the bar is set somewhat too high for success. A success in this framework seems to be a significant and exciting improvement for the entire Haskell community. And there have certainly been a number of those. But there are also projects that are well done, produce results that live on, but which aren’t immediately recognizable as awesome new things. Furthermore, GSoC explicitly lists a goal as inspiring young developers towards ongoing community involvement/open source development, and these notes don’t really take that into account.

    For example, I don’t know of any direct uptake of the code from the HaskellNet project, but the author did go on to write a small textbook on Haskell in Japanese. As another example, Roman (of Hpysics [sic]) has, as I understand it, been involved in a Russian language functional programming magazine.

    So I think there needs to be a slightly more granular scale that can capture some of these nuances. Perhaps something like the following:

    • ☐ Student completed (ie. got final payment)

    • ☐ Project found use (ie. as a lib has at least one consumer, or got merged into a broader codebase)

    • ☐ Project had significant impact (ie. wide use/noticeable impact)

    • ☐ Student continued to participate/make contributions to Haskell community

    A few more detailed comments about projects that weren’t necessarily slam dunks, but were at the least, in my estimation, modest successes:

    1. GHC-plugins – Not only was the work completed and does it stand a chance of being merged, but it explored the design space in a useful way for future GHC development, and was part of Max becoming more familiar with GHC internals. Since then he’s contributed a few very nice and useful patches to GHC, including, as I recall, the magnificent TupleSections extension.

    2. GHC refactoring – It seems unfair to classify work that was taken into the mainline as unsuccessful. The improvements weren’t large, but my understanding is that they were things that we wanted to happen for GHC, and that were quite time consuming because they were cross-cutting. So this wasn’t exciting work, but it was yeoman’s work helpful in taking the GHC API forward. It’s still messy, I’m given to understand, and it still breaks between releases, but it has an increasing number of clients lately, as witnessed by discussions on -cafe.

    3. Darcs performance – by the account of Eric Kow & other core darcs guys, the hashed-storage stuff led to large improvements (and not only in performance)[2] – the fact that there’s plenty more to be done shouldn’t be counted as a mark against it.

    Further criticisms by sclv from 2013:

    I no longer think these summaries are even modestly useful, because the judgement criteria are too harsh and too arbitrary. They reflect a bias towards “success” of a GSoC project as something directly measurable in uptake and users within a relatively short span of time.

    GSoC projects are chosen, on the other hand, with an eye towards long-term payoff in Haskell infrastructure.

    The criteria that would yield us “high success” projects in the sense judged here would also yield us projects that weren’t very interesting, useful, or important.

    ↩︎
  3. For example, how long must a student ‘continue to participate/make contributions to Haskell community’? Spencer Janssen, a successful 2006 SoC student, went on to be one of the 2 main developers on the popular Xmonad window manager, but then wound down his Haskell contributions and stopped entirely ~2009 (much to my dismay as an Xmonad developer). Is he a success for SoC?↩︎

  4. I can hear the wankers in the peanut gallery - “Yeah, and it’s been buggy ever since!” Hush you. (Waern’s reply.)↩︎

  5. In the 2010 survey of Haskellers, 3% reported ever using Eclipse for Haskell programming. In the 2011 survey, 4% did.↩︎

  6. As of 2011-03-18, I have local copies of 8 repositories which seem to make use of the new syntax: angle, cabal, concurrent-extra, hashable, rrt, safeint, spatialIndex, unordered-containers, wai-app-static.↩︎

  7. A development that surprises me, since I had been under the impression that most GHC work ultimately winds up being scrapped or abandoned like Liskell or Mobile Haskell.↩︎

  8. 2011-12-11, Google+:

    “Regarding the parallel cabal-install patches - Duncan is concerned that my changes are too invasive. I hope to get them merged in during the next few months after some reworking (we’re currently discussing what needs to be done).”

    ↩︎
  9. From my conversation in #darcs with Eric Kow and other Darcs developers:

    < kowey> mornfall [Petr Ročkai] and I did discuss the proposal beforehand... one
             thing to clear up first of all is that this is very specifically about the primitive
             patch level and not a wider patch theory project
    < kowey> the difference being that it's easier to do in a SoC project
    < owst> Also, mornfall has the advantage of being very experienced with the Darcs
            code-base, and its concepts - he's not going to require time to "get used to it"
            so I'd argue he's certainly not the average SoC student...
    < kowey> I think mornfall has also put a good show of effort into thinking about
             (A) building off previous thinking on the matter (see his proposal),
             (B) fitting into the Darcs agenda -- particularly in aiming for this work to happen in mainline
                 with the help of recent refactors and also to result in some cleanups and
             (C) making the project telescope
    < gwern> owst: well, in a sense, that's a negative for the project as well as a positive implementation-wise
             - SoCs are in part about bringing new people into communities
    < kowey> by telescope I mean, have a sane ordering of can-do to would-be-awesome
    < Heffalump> gwern: yeah, though the Haskell mentors didn't see it that way
    < kowey> (the mental image being that you can collapse a telescope)
    < gwern> owst: I didn't mention that because I'm trying to not be unrelentingly negative,
             and because investigating backgrounds of everyone would require hours of work
    < kowey> (sorry, I misread and see now that gwern did catch that this was primpatch specific)
    < owst> gwern: in part, but not in full - they are ultimately also about "getting code written" for a project
            and that's certainly going to happen for mornfall's project!
    < gwern> owst: that's the same reason I don't also judge SoCs by whether the student continued on
             in the community - because it'd be too damn much work
    < owst> gwern: sure, I thought as much.
    < gwern> owst: even though the student's future work would probably flip a number of projects from failure
             to success and vice-versa
             (eg. what has Spencer Janssen been doing lately? how many of the SoC students you see on the page
             did that and have not
             been heard from since like Mun of Frag?)
    < gwern> so, I just judge on whether the code gets used a lot and whether it did something valuable
    < kowey> it's a project that has long-term value for Darcs
    < kowey> I think I agree with the last line of your prediction, "That might be a success by the
             Darcs's team's lights, but not in the sense I have been using in this history."
    < kowey> although I'm certainly hoping for something better in the middle bit: code that winds up in
             darcs mainline plus specifications on the wiki
    ↩︎
  10. I wrote mueval but tryhaskell.org uses a fork which takes expressions over a pipe as opposed to being a one-shot CLI tool.↩︎

  11. On Google+:

    Ah, ghcLiVE isn’t designed to have a server hosted, it’s designed to run outside a sandbox. Not that I would mark this successful myself, but mostly because it uses Yesod and cabal dependency hell means very few people ended up using it…I’d like to port ghclive to scotty, which has far fewer build dependencies, I think people would actually use it then

    ↩︎
  12. Guessing 50% simplifies the calculation, and isn’t too far off: Doing a quick sum of all the non-2012 successful/unsuccessful ratings, I get a chance of being successful at (4+6+2+3+3+4) / ((4+2+5+1+2+4) + (4+6+2+3+3+4)) = 0.55, which isn’t terribly different from a guess of 0.5.↩︎

  13. Many good and worthwhile projects suffer this fate because of their academic origins. There’s no reward for someone who creates a great technique or library and gets the wider community to adopt it as standard. As far as the Haskell community is concerned, one Don Stewart is worth more than a dozen Oleg Kiselyovs; Oleg’s work is mindblowingly awesome in both quantity and quality, everyone acknowledges, but how often does anyone actually use any of it?

    (Iteratees may be the exception; although there are somewhere upwards of 5 implementations as of 2010, by Oleg and others, leading to a veritable Tower of Iteratee situation, the original iteratee has picked up 4 reverse dependencies and its most popular successor 33 as of this writing, and iteratees in general may one day become as widely used as bytestrings.)↩︎

  14. Liam O’Connor begs to differ on the value of a DPH or CUDA SoC, arguing they are definitely unnecessary and even a “terrible” idea.↩︎