Archive for the ‘Scheme’ Category

Mastering Higher Order Programming Techniques with SICP Chapter 1

September 22, 2007

Section 1.1 was deceptively simple, but things rapidly got heavy in section 1.2. The process of tree recursion was exceedingly difficult to visualize. Exercise 1.13 asked us to “use induction” to prove something. Plugging through exercise 1.20 was painful back when I was working on my own. The march of technology added complications to the later problems that made me dread going on… and the Miller-Rabin problem was completely mystifying. (I hacked out a Common Lisp version of the algorithm described on Wikipedia, but even that exercise failed to give me the insight necessary to understand the question. I haven’t been that befuddled since that problem from PAIP chapter 2….) Collectively, these experiences might lead one to believe that working through SICP is a lost cause….

A couple of things can really improve your chances of getting through it, though. One is to watch the videos— they make things more interesting and more accessible. Another is to have a friend or two you can bounce things off of. “I tried x on problem y… is that what you think they wanted there? Also, are you getting something like z on problem w?” A few words of encouragement from a peer can really help you focus.

Continuing on into section 1.3, things got much easier. The lecture was outstanding. It is probably the definitive sales pitch for higher order thinking. If you ever wondered what people mean when they talk about “higher order procedures,” this is the place to go. The problems were for the most part straightforward or of medium difficulty. A lot of them push you to alter or imitate clear examples, so they are pretty “fair” as exercises go. Others require a modicum of persistence and experimentation, but are still pretty straightforward. I got through most of them with only a couple of minor questions. (Simpson’s rule in 1.29 didn’t work for me as advertised. The instruction in 1.35 to show that the golden ratio is a fixed point of the transformation x ↦ 1 + 1/x is perhaps beyond my math skills. I wasn’t sure how to test my answer in 1.44, and I’m not quite sure where to go with 1.46.) But if the goal here was to get me to understand when and how to implement functions that take functions as arguments and return functions as values… then I’m there. They push the level of abstraction… and then push it some more… and then push it until I want the merry-go-round to stop. I used to think that mapcar was pretty darn cool. Now I have an entire new mental apparatus that I can apply regardless of the language or platform.
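
To give a taste of what the section is driving at, here is a version of the book’s average-damp from section 1.3.4 (with its average helper inlined)– a procedure that takes a procedure as an argument and returns a new procedure as its value:

;; Given a procedure f, return a new procedure that averages
;; its input with f applied to that input.
(define (average-damp f)
  (lambda (x) (/ (+ x (f x)) 2)))

;; ((average-damp (lambda (x) (* x x))) 10) evaluates to 55–
;; the average of 10 and 10 squared.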

So I’ve got the bulk of the problems from the first chapter completed, and it has all culminated in a deeper understanding of how to apply and implement functional-style abstractions. I’m really pleased with how far they’ve brought me in the text, but I’m beginning to get anxious: I want to go code something… and I’m afraid of how far into the stratosphere they’re going to push my mind. At first I just wanted to learn more powerful methods of creating general solutions… really, I only wanted to get maybe half an order of magnitude better as a programmer. I had no idea that they would push my mind so far beyond its comfort zone. I had no idea I even had a mental comfort zone! The experience gives me a new respect for MIT… and I’m glad I have a chance to experience a bit of what they put their computer science students through. (I also want to give my thanks to the SICP study group for providing the inspiration and the encouragement to keep working through this book!)

Two minor things stand out for me, as far as mental processes go. One is the “faith oriented programming” that Sussman advocates in the lecture: if you’re not sure how to code a complex routine, just assume the hard parts are already done. If you can put a name on what’s missing, you’ll easily be able to address it later. (Thankfully, DrScheme doesn’t auto-highlight calls to not-yet-written procedures as “syntax errors”!) This worked surprisingly well for me when I went back and reworked the Pascal’s Triangle problem from section 1.2. Also, in section 1.3 they formalize the “guess-fudge” method that helped me do well in so many math classes in the past. The iterative approaches in the section mirror my own problem-solving techniques: if you’re not sure of the answer, guess… then make small improvements or adjustments until it’s good enough. Combining these techniques gives you the key to solving recursion problems: start with a rough outline that covers the trivial cases… then “guess-fudge” the missing piece into existence with a function or two. This works for many of the problems in the text.
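
One common shape for that Pascal’s Triangle solution (exercise 1.12) falls right out of this approach– the edges of the triangle are the trivial cases, and the missing middle is simply named into existence as two recursive calls:

;; Element col of row `row` of Pascal's Triangle (both zero-indexed).
(define (pascal row col)
  (if (or (= col 0) (= col row))
      1                                 ; the edges are always 1
      (+ (pascal (- row 1) (- col 1))   ; otherwise, add the two
         (pascal (- row 1) col))))      ; elements in the row above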

Anyways, if you got overwhelmed in section 1.2, just hang tight… it gets easier. Or maybe you get smarter. Something. They keep the pressure on, but they are very steady in section 1.3. It’s a lot of work– almost like learning to program all over again– but everything they make you do is part of their perfect plan to transform the way you think about algorithms. I can’t for the life of me imagine why so many people have complained about Scheme: it’s an exceedingly beautiful language that gets out of the way so that you can express raw thought in its most elegant form. It’s hard to believe how much can be accomplished with only a handful of primitives and parentheses….

For those who want to compare notes, here’s my code for exercises 1.29 to 1.33, 1.35 to 1.39, and 1.40 to 1.46. Also, here’s my algorithm for the Miller-Rabin test, based on the instructions from this Wikipedia entry.

SICP Exercises 1.22-24 are Somewhat Out of Date

September 13, 2007

(This is the point where I broke down and finally started using Scheme.  Going without my Emacs shortcuts in DrScheme is painful, but the IDE-like environment is probably a lot more friendly to the raw recruits.  I’m as yet too lazy to rustle up SLIME48/Scheme48 and just want to finish some problems….)

Starting with exercise 1.22 in SICP, we are given code with a reference to “runtime.” This does not exist in the MzScheme language I was using under DrScheme…. Thanks to a quick tip from Sridhar Ratnakumar, I didn’t have to bang my head against the wall figuring this out. If you define runtime thusly– (define (runtime) (current-milliseconds))– the code from SICP will run as-is.

One of the first things we have to do here is to write a procedure search-for-primes that “checks the primality of consecutive odd integers in a specified range.” If you think in Basic, you’ll probably want a FOR-NEXT loop for this. Or if you’re really hip, you might want a DO-UNTIL loop. I looked at some options for doing that, but I have to admit that a recursively defined iterative function is much simpler. Much simpler than trying to keep track of yet-another-language’s control construct syntax….

Exercise 1.22 asks us to compare the runtimes of our prime checker for ranges from 1,000 up to 1,000,000. On my machine, these all came back as taking 0 milliseconds– apparently my box is a little faster than the ones running back in the eighties. I started my comparison at 100,000,000… and did a quick munge to make the code give output only when we actually hit a prime. We can see the output of my code below. Since the underlying algorithm has Θ(√n) order of growth, each time the range goes up by a factor of 10 we expect the time it takes to find a prime to increase by a factor of about SQRT(10), or 3.1622. Is that what’s happening here?  (Here’s my code file if you want to try this at home.)
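
For reference, a minimal sketch along these lines might look like the following (this assumes the book’s prime? and the runtime definition above; my actual code is in the linked file):

;; Check `count` consecutive odd integers starting at or above `start`,
;; printing only those that turn out to be prime, with elapsed time.
(define (search-for-primes start count)
  (define (timed-test n)
    (let ((start-time (runtime)))
      (if (prime? n)
          (begin (display n)
                 (display " *** ")
                 (display (- (runtime) start-time))
                 (newline)))))
  (define (iter n remaining)
    (if (> remaining 0)
        (begin (timed-test n)
               (iter (+ n 2) (- remaining 1)))))
  (iter (if (odd? start) start (+ start 1)) count))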

> (search-for-primes 100000000 100)

100000007 *** 0
100000037 *** 0
100000039 *** 15
100000049 *** 0
100000073 *** 16
100000081 *** 0
> (search-for-primes 1000000000 100)

1000000007 *** 16
1000000009 *** 16
1000000021 *** 31
1000000033 *** 16
1000000087 *** 16
1000000093 *** 15
1000000097 *** 32
> (search-for-primes 10000000000 100)

10000000019 *** 141
10000000033 *** 313
10000000061 *** 140
10000000069 *** 156
10000000097 *** 312
> (search-for-primes 100000000000 100)

100000000003 *** 656
100000000019 *** 641
100000000057 *** 656
100000000063 *** 656
100000000069 *** 656
100000000073 *** 656
100000000091 *** 672
> (search-for-primes 1000000000000 100)

1000000000039 *** 2656
1000000000061 *** 2281
1000000000063 *** 2124
1000000000091 *** 2124
> (sqrt 10)
3.1622776601683795
> (* 16 (sqrt 10))
50.59644256269407
> (* 32 (sqrt 10))
101.19288512538814
> (* 140 (sqrt 10))
442.71887242357315
> (* 313 (sqrt 10))
989.7929076327027
> (* 656 (sqrt 10))
2074.454145070457

Just eyeballing it here, it appears that the further down the list we go, the closer the time ratio gets to SQRT(10).

Let’s try that last one a few more times to see if there’s any variation:

> (search-for-primes 1000000000000 100)

1000000000039 *** 1968
1000000000061 *** 1937
1000000000063 *** 1969
1000000000091 *** 1922
> (search-for-primes 1000000000000 100)

1000000000039 *** 1937
1000000000061 *** 1968
1000000000063 *** 1937
1000000000091 *** 1984

> (search-for-primes 1000000000000 100)

1000000000039 *** 1953
1000000000061 *** 1999
1000000000063 *** 1968
1000000000091 *** 1999

Okay… so we’re pushing 2000 milliseconds now for these primes. Go up another order of magnitude and we should see around 6000 or so (2000 × 3.1622 ≈ 6325).
> (search-for-primes 10000000000000 100)

10000000000037 *** 6280
10000000000051 *** 6217
10000000000099 *** 6311
> (search-for-primes 10000000000000 100)

10000000000037 *** 6326
10000000000051 *** 6670
10000000000099 *** 6295

All right! Spot on. Now let’s try it with the changes suggested in Exercise 1.23.
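
For reference, the Exercise 1.23 change is roughly this: instead of stepping the test divisor by 1, skip the even numbers after 2.

;; It is pointless to test even divisors greater than 2.
(define (next n)
  (if (= n 2) 3 (+ n 2)))

;; ...and in find-divisor, replace (+ test-divisor 1)
;; with (next test-divisor).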

> (search-for-primes 10000000000000 100)

10000000000037 *** 3374
10000000000051 *** 3546
10000000000099 *** 3389
> (search-for-primes 10000000000000 100)

10000000000037 *** 3375
10000000000051 *** 3358
10000000000099 *** 3531

And it looks like we’re not quite halving the time it takes to solve the problems now…. Nice.

Moving on to Exercise 1.24… we run into problems. The random number generator only accepts numbers within a certain range– and because our machine is running so fast, we’ve bumped things up to some *really* large numbers….

random: expects argument of type <exact integer in [1, 2147483647]>; given 9999999999999

Another problem we have is that MzScheme doesn’t grok the book’s true and false constants. When we change start-prime-test to use fast-prime? (as the exercise instructs), we need fast-prime? to use #t and #f instead.
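
The fix is small– the book’s fast-prime? just needs the literal booleans (this assumes fermat-test is defined as in the text):

(define (fast-prime? n times)
  (cond ((= times 0) #t)        ; was: true
        ((fermat-test n) (fast-prime? n (- times 1)))
        (else #f)))             ; was: false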

Let’s run our code without fast-prime?, searching just below the upper bound of our random number generator.

> (search-for-primes 2147483547 100)

2147483549 *** 32
2147483563 *** 31
2147483579 *** 31
2147483587 *** 31
2147483629 *** 32
2147483647 *** 31

And now let’s try it with fast-prime?, running the Fermat test 100 times for each candidate.

> (search-for-primes 2147483547 100)

2147483549 *** 0
2147483563 *** 16
2147483579 *** 15
2147483587 *** 0
2147483629 *** 16
2147483647 *** 16

Even with that insane number of tests, we can identify the primes in either half the time or practically no time at all. Let’s shift the number of Fermat tests down to 10 and see if there’s much difference.

> (search-for-primes 2147483547 100)

2147483549 *** 0
2147483563 *** 0
2147483579 *** 0
2147483587 *** 0
2147483629 *** 0
2147483647 *** 0

We get the same results… but all are practically instantaneous now.

I don’t think I have enough information to give a really good answer to Exercise 1.24, but it’s clear that Θ(log n) growth is the way to go if you can manage it. The fact that you have to choose the number of Fermat tests to run adds some additional complexity to the question… and the combination of a fast computer with a limited random number generator makes it difficult to determine what’s going on here experimentally.

Computer Science is Not About Computers… and tasting a bit of Syntactic Sugar

September 11, 2007

I finally got around to watching the first of the SICP lectures. The first third of it is a real treat for those with a mathematical bent. I summarize Abelson’s argument here:

Computer Science is not a science.

Computer Science is not about computers.

Computer Science is not even “real.”

Geometry is not about Surveying or Surveying instruments.

Geometry is about using axioms and logic to provide a framework for exploring declarative knowledge– identifying and articulating what is true.

Similarly, Computer Science is really about imperative knowledge– articulating how to do things.

Computer Science deals with idealized components. There’s not all that much difference between what you can build and what you can imagine. The constraints imposed in building large software systems are not physical constraints, but rather the constraints are the limitations imposed by your own mind.

Computer Science is therefore an abstract form of engineering.

The second portion of the lecture dealt with an overview of the course. SICP is going to focus on programming techniques for controlling complexity. Abelson made something as simple as Black Box Abstraction sound really interesting: the key was in how he highlighted that we’re not just going to build black boxes, but that we’re also going to build black boxes that return other black boxes… and by passing black boxes as parameters, we’ll be able to obtain another order of magnitude of abstraction in our programming style– far beyond how non-functional programmers tend to think. He spent less time selling us on the Conventional Interfaces section, though he did mention that this part would touch on OOP somewhat. The final portion of the course, on Making New Languages, sounded the most exciting– but his brief description of Metalinguistic Abstraction sounded (at this point, mind you) pointlessly geeky and pedantic. I must as yet lack some sort of illumination, because I suspect that my mind should be blown away by the mere hint of the magic of Eval and Apply….

In the last section of the lecture he went over a bare bones introduction to Lisp. He was using Scheme– and even though I got DrScheme up and running with almost no hassle, I still recoil at the thought of spending any time away from Common Lisp, SLIME, and Emacs. Nevertheless, the Scheme approach is very clean and slightly more intuitive than the traditional Common Lisp way of doing things. Just for fun, I wrote a macro that implements some of the functionality of Scheme’s define operator:

CL-USER> (macroexpand-1 '(define (sq-a x) (* x x)))
(DEFUN SQ-A (X) (* X X))
T

CL-USER> (macroexpand-1 '(define sq-b (lambda (x) (* x x))))
(DEFUN SQ-B (X) ((LAMBDA (X) (* X X)) X))
T

CL-USER> (define (sq-a x) (* x x))
SQ-A
CL-USER> (sq-a 11)
121
CL-USER> (define sq-b (lambda (x) (* x x)))
SQ-B
CL-USER> (sq-b 12)
144

Note that the second case was somewhat trickier to implement. I tried several different ways of doing it with setf and defun before getting a version that would work identically to the first case. The key (or possibly the ‘hack’) was to define the function with the argument list taken from the lambda… and then splice those arguments in with ,@ after the lambda expression. I think I had this same difficulty when I was trying to understand Michael Harrison’s example from The Little Schemer a few weeks ago– Scheme and Common Lisp work in subtly different ways here, since Scheme puts functions and variables in a single namespace while Common Lisp keeps them separate.  But working through this example has helped me understand the differences a little better.
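
A minimal version of such a macro might look something like this (not necessarily my exact code, but it reproduces the expansions shown above):

(defmacro define (spec &rest body)
  (if (consp spec)
      ;; (define (name args...) body...) => (defun name (args...) body...)
      `(defun ,(car spec) ,(cdr spec) ,@body)
      ;; (define name (lambda (args...) body...)) => a defun whose body
      ;; applies the lambda to its own spliced-in argument list
      (let ((args (cadr (car body))))
        `(defun ,spec ,args (,(car body) ,@args)))))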

Anyways, the most shocking thing in the lecture for me was when Abelson blithely tossed out the phrase “syntactic sugar.” (He defined it as “somewhat more convenient surface forms for typing something.”) I usually associate this phrase with smarty-pants blogger types who really want to sound cool. Yeah, like all you need to do to sound cool is criticize Java a little and throw this phrase around!! Now that I’ve seen it turn up here, I guess I’ll have to get down off my high horse the next time my hackles are raised over it….

Master Fundamental Hacking Principles with Scheme

August 21, 2007

“If you don’t understand the principles, or if you are the kind of person who wants to be given a cookbook of what to do rather than to think creatively, or if you only want to work on problems that are pretty much like the problem you worked on last time, then this approach will not work for you. There are other approaches that will be more reproducible for a limited range of simple problems, but there is no better way than SICP to learn how to address the truly hard problems.” — Peter Norvig

Michael Harrison is looking to kick off an online SICP study group that will work through MIT’s OpenCourseWare materials from September 5 to December 12.  If you get stuck working through the difficult Lisp books on your own (or if you get distracted by other topics because you don’t have a study schedule), then you might benefit from joining such a group and having folks to bounce ideas off of.

Of course, you could always try working through the book by yourself and post lame blog entries about how it’s so hard to get through it on your own… but come on… you’re not only going to irritate the online Lisp community with your inane ramblings, you’re also not going to get the feedback you need to truly develop your skills!  (Get a clue… I mean… why don’t you wait until you have something to say before you start posting?!)  So drop Michael a line and join the fun!


:)