
[–]sclv 8 points (1 child)

I really appreciate this roundup. But I think the bar is set somewhat too high for success. A success in this framework seems to be a significant and exciting improvement for the entire Haskell community. And there have certainly been a number of those. But there are also projects that are well done, produce results that live on, but which aren't immediately recognizable as awesome new things.

So I think there needs to be a slightly more granular scale that can capture that. Among the projects that I think weren't slam dunks but were modest successes:

GHC-plugins -- Not only was the work completed, and not only does it stand a chance of being merged, but it explored the design space in a useful way for future GHC development, and it was part of Max becoming more familiar with GHC internals. Since then he's contributed a few very nice and useful patches to GHC, including, as I recall, the magnificent TupleSections extension.

GHC refactoring -- It seems unfair to classify work that was taken into the mainline as unsuccessful. The improvements weren't large, but my understanding is that they were things we wanted to happen for GHC, and that they were quite time-consuming because they were cross-cutting. So this wasn't exciting work, but it was yeoman's work, helpful in taking the GHC API forward. It's still messy, I'm given to understand, and it still breaks between releases, but it has an increasing number of clients lately, as witnessed by discussions on -cafe.

Darcs performance -- By the account of Eric Kow & other core darcs guys, the hashed-storage work led to large improvements (and not only in performance) (http://blog.darcs.net/2010/11/coming-in-darcs-28-read-only-support.html) -- the fact that there's plenty more to be done shouldn't be counted as a mark against it.

Also we should take further community involvement into account.

GSoC explicitly lists some of its goals as such:

* Get more open source code created and released for the benefit of all;
* Inspire young developers to begin participating in open source development;
* Help open source projects identify and bring in new developers and committers.

For example, I don't know of any direct uptake of the code from the HaskellNet project, but the author did go on to write a small textbook on Haskell in Japanese. As another example, Roman (of Hpysics) has, as I understand it, been involved in a Russian language functional programming magazine.

[–]sclv 4 points (0 children)

So maybe a grid is the right thing:

[ ] Student completed (i.e. got final payment)
[ ] Project found use (i.e. as a lib has at least one consumer, or got merged into a broader codebase)
[ ] Project had significant impact (i.e. wide use/noticeable impact)
[ ] Student continued to participate/make contributions to Haskell community

[–][deleted] 6 points (1 child)

Nice one for going to all the trouble to put that together. I almost didn't upvote because you said my proposal was bad, but then I did because you were right. It might be an idea to push for more/better proposals, though, since the two fora in use are a bit stale.

[–]gwern[S] 2 points (0 children)

> I almost didn't upvote because you said my proposal was bad, but then I did because you were right.

"Admitting error clears the score and proves you wiser than before."

> It might be an idea to have a push for more/better proposals though since the two fora in use are a bit stale.

This page is part of my push effort. :) But seriously, I wonder how many new proposals there could be? There are only a few slots, and still a lot of good ideas left.

[–]tora 0 points (1 child)

Having the summary of who did what is good, and it's interesting to see what's gone on. And the effort is appreciated.

But: "The only one that bothers me is the EclipseFP project." Seriously? While some of your measures of success or failure are a bit flimsy (reverse dependencies on hackage for instance, though at least that's quantifiable)[*], personally biasing what you're about to present just feels wrong. OTOH, I strongly disagree with your assessment of the EclipseFP project (having, working on and improving an eclipse plugin would be wonderful for Haskell in the long run), which could be why it irked me so much.

Reading on... perhaps I misunderstood. Did you mean it was a bad idea for a GSoC project just because of its scale (as it turned out to be), or that Eclipse + Haskell is really a bad idea in general?

[*] I appreciate the discussion on ranking popularity at the end which mitigates this complaint.

[–]JPMoresmau 1 point (0 children)

Well, EclipseFP is alive and kicking, and we even have some users! So maybe the GSoC project didn't achieve its goal, but at least it contributed to keeping the project alive, and more: the current architecture of EclipseFP is still based on the GSoC work (using the scion library and the GHC API, JSON communications between Eclipse and the Haskell code, etc.).

[–]kamatsu 0 points (0 children)

"Add NVIDIA CUDA backend for Data Parallel Haskell": DPH is rarely used;

Because it's not mature yet, yes. Once it's fully developed it should see substantial pickup in scientific computing and other areas. I know that a bunch of computer vision researchers at NICTA have decided to take it up.

> a CUDA backend would be even more rarely utilized; CUDA has a reputation for being difficult to coax performance out of; and difficulties would likely be exacerbated by the usual Haskell issues with space usage & laziness. (DPH/CUDA use unboxed strict data, but there are interface issues with the rest of the boxed lazy Haskell universe.)

This is basically disingenuous. You state that DPH uses unboxed strict data, and then cite "interface issues" in moving between boxed lazy data and unboxed strict data to justify your complaint. For starters, once DPH is completed, transforming between the two should be as simple as calling a function; I hardly think that's a problem. CUDA would work just fine with unboxed strict data, and you can convert between the two representations easily. That said, CUDA can be finicky, but it's substantially easier to write Accelerate code than to write CUDA code. Accelerate might be a bit less performant, but that is more than made up for in productivity gains.
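To illustrate the kind of conversion in question, here's a minimal sketch using the standard `array` package that ships with GHC (not DPH or Accelerate themselves, and the function names are my own): going between an ordinary boxed lazy list and an unboxed strict array really is just a function call each way.

```haskell
import Data.Array.Unboxed (UArray, listArray, elems)

-- Convert an ordinary (boxed, lazy) list into an unboxed, strict
-- array; this forces and unboxes every element.
toUnboxed :: [Double] -> UArray Int Double
toUnboxed xs = listArray (0, length xs - 1) xs

-- And back again: 'elems' rebuilds a plain lazy list from the array.
fromUnboxed :: UArray Int Double -> [Double]
fromUnboxed = elems

main :: IO ()
main = print (fromUnboxed (toUnboxed [1.5, 2.5, 3.5]))
```

The round trip costs a copy and forces evaluation, but it's a library call, not an architectural obstacle.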

> All in all, there are many better SoCs.

I think this is an unnecessary SoC, seeing as a DPH-like backend has already been produced for the Accelerate library. It should be easy to adapt to DPH once everything's a bit more mature. But when trying to come up with an impartial analysis, please don't let your own personal opinions about the DPH project colour your judgement.