Elf Sternberg's Pendorwright Projects
Elf M. Sternberg

Dear Gods, I’m not even sure why I should even bother, but the C++ experiments I’ve conducted recently were so much fun I’ve decided to put TOXIC (Terabytes of XML, Indexed and Compressed) and Twilight (A basic GraphDB with local indexing and aggressive caching, using RDF triples as index keys) back into my projects list.  I’m not even sure why.  It doesn’t seem like a very smart thing to distract me with yet more shiny.  But these would be fun, fun shiny.


Recently, while I was at Beer && Coding, one of the others came in with a problem that they’d been given by a potential employer. They’d hoped that we’d be able to help finish it. Nobody did in the time allotted, but I got pretty far with my Scheme version. However, Scheme wasn’t in the list of legal target languages.

The problem stated was:

Given a number (assume base 10) less than 10,000, write a program in C++ that will reverse the digits of that number, calculate the original number to the power of the new number, and print it out. You may not use Boost, GMP, or any library other than that provided by the C++ Standard Library.

I don’t know C++. I haven’t ever written C++ professionally, and I haven’t actually looked at C++ since 1999 or so. As a professional, I’m aware of what’s going on in the zeitgeist, and at my job at Spiral Genetics I interacted with two very talented C++ developers a lot, so I was aware of things like the emerging C++ Standard Library and RAII and so forth. I didn’t know what they meant, but I had heard of them. I’ve also been aware of the emerging standards in C++11 and C++14, mostly thanks to Slashdot, Hacker News, and their ilk (don’t read the comments, don’t ever read the comments), so I’d heard about auto_ptr and C++11 lambdas and the like.

It took about an hour of googling to get up to speed on things like namespaces, containers, for_each, lambdas, and the like. I really like the new unique_ptr construction. That’s very nice.

My basic solution degrades to 4th-grade mathematics: Break the multiplicand up into a list of single digits, multiply each digit with the multiplier, then redistribute the values up the tens, hundreds, etc., etc. This solution is not particularly fast or space-efficient, but it has the virtue of being comprehensible by any ten-year-old.
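That digit-by-digit approach can be sketched quickly. The version below is in Javascript rather than the C++ of the actual solution, and the function names and least-significant-digit-first representation are my own choices for this sketch:

```javascript
// A number is an array of decimal digits, least-significant first,
// so 321 is [1, 2, 3].

// Multiply a digit list by a small multiplier the fourth-grade way:
// multiply each digit, then carry the excess up to the next column.
function multiply(digits, multiplier) {
  const result = [];
  let carry = 0;
  for (const d of digits) {
    const product = d * multiplier + carry;
    result.push(product % 10);
    carry = Math.floor(product / 10);
  }
  while (carry > 0) {
    result.push(carry % 10);
    carry = Math.floor(carry / 10);
  }
  return result;
}

function toString(digits) {
  return digits.slice().reverse().join("") || "0";
}

// 12 reversed is 21, so compute 12^21 by repeated multiplication.
let acc = [1];
for (let i = 0; i < 21; i++) acc = multiply(acc, 12);
console.log(toString(acc)); // 46005119909369701466112
```

Since the result never fits in a machine word for interesting inputs, everything stays in digit-list form until the final print.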

As usual, I’ve provided a test suite, as well as a pair of utility functions for converting the list to a string, or an unsigned long. The latter only works with very small results. The executable, “cheapgmp”, works as specified in the problem statement.

The source code is, of course, available on Github.

Since I have an Xbox 360 now, I've been playing a lot of old games that can be had in the GameStop "Used" bin for less than five bucks a pop. I played through Doom 3, which I've praised before, and Doom 3: Resurrection of Evil (RoE), which I panned. The difference between the two is rather straightforward: Doom 3 had an underlying story, whereas RoE was a rather dull first-person monster hunt that happened to be set in a place vaguely resembling the Mars base of Doom 3.

The Lost Mission is much better than Resurrection of Evil. It consists of six Mars maps from the original Doom 3 development cycle, heavily edited, plus two new maps set in Hell. It's rather short; Doom 3 took twelve hours to play through, whereas Lost Mission will take only two. The (new) premise is that you're a soldier from Bravo Team, the squad you were supposed to meet up with in Doom 3 but never quite did; you find yourself alone in a part of the base with yet another secret teleporter project, and you have to shut down the teleporter lest the demons find it and use it to get to Earth.

Gameplay is much better developed than in RoE. Since the player is assumed to be a skilled marine, and the game's authors know you've already played at least one other Doom 3 game, it doesn't mess around. Monsters that earlier games reserved until far later in the cycle appear much earlier here.

(Monster spoilers behind the cut.)

In all three games, you find people's cell phones lying around with memos, emails, voice recordings, and security keys for getting through locked doors. It's a fairly common trope. In Doom 3, those phones also told stories: arguments among crew members, basic kvetching about life on Mars, crappy food, missing home. Many phones had messages from other people, showing an interconnected community. RoE's writers didn't understand that; there's a sense of "used furniture" about its messages, which further the game but don't give you a sense that Mars was a place where real people lived before disaster struck. Lost Mission is much better. It's a very short mission, so the underlying themes center on a tragedy that happened to a technician named "Jenny," and how her death affected the positioning of tools, weapons, and medical supplies, all of which is handled with surprising subtlety and effectiveness. Sadly, you never find Jenny's phone. Security's efforts to deal with the early effects of exposure to Hell's madness, and a cultural conflict between the science crew and the construction crew, are also handled deftly. This sensitivity is what RoE desperately needed, but failed to have.

Anyway, obviously I enjoyed this game much more than Resurrection of Evil. Dear id, more of this, please. Doom 3 was a lot of things: a great story, a horror game, a combat game, a jumping-at-shadows survival game. Doom 4 so far looks like a boring splatterfest; please make it better.


So, since I live in Seattle, I finally decided to try Washington State's newest legal crop. Friends of mine assured me that it would be great, that it would unlock interesting details, that I would enjoy the experience and it would make me feel better in the end.

Sadly, none of that was true. I tried it several times, sometimes in a hybrid blend, sometimes in an entirely Sativa blend. I had no real interest in Indica. I always used edibles because I'm reluctant to put any sort of crap into my lungs, and I slowly upped, in successive experiments over several weekends, from five to fifteen to even twenty milligrams, never mixing with alcohol because that is one of the strongest cofactors that leads to brain dysfunction, and I like my brain.

You know what? It didn't do a whole lot for me. It made me feel icky, and a little bit like someone put a plumber's vise around my head and tightened it down a bit. I did get a buzz. I never got the munchies. It made me slightly disoriented and I had to think harder to walk up and down stairs.

Who wants that?

There was one interesting effect. With enough sativa, I would see geometric patterns when I closed my eyes. The best one was millions of 5s, the numeral, all in a lovely serif font with a matte silver tinting on a dark grey background, slowly moving toward me like that old Star Trek screensaver. Interesting, but, you know, there's always the Internet for stuff like that.

The other interesting effect is adults only... (behind the cut).

I'm kinda disappointed. It just seems like a lot of discomfort for not a lot of benefit. I'm glad other people like it. Part of my disappointment is that, unlike wine, it's not a depressant and it's much lower calorie, so I'd have been ecstatic if it worked for me. It doesn't. I think I'll stick to wine. It's much more pleasant.

Current Mood: annoyed


No, I don’t mean regular expressions.  I mean that thing you see in “modern” programming languages where two functions with the same name that take different arguments do different things.  We use it all the time.  The plus symbol, “+“, often has two separate meanings, and is often processed by two different operations:

add(number, number) -> mathematical addition
add(string, string) -> textual concatenation

Now, most programming languages provide this differentiation by convention.  C++ is fairly unusual in that it permits it explicitly; you can use the same name for a function with different inputs (the arguments), and the C++ compiler will connect any calls to that function name correctly depending upon the arguments you provide at compile time.
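The conventional version can be sketched in Javascript, with a hand-rolled runtime check standing in for what C++ overload resolution does at compile time (the function body and error behavior here are my own sketch):

```javascript
// Convention-based dispatch: one name, behavior chosen by inspecting
// the arguments at run time rather than at compile time.
function add(a, b) {
  if (typeof a === "number" && typeof b === "number") return a + b; // addition
  if (typeof a === "string" && typeof b === "string") return a + b; // concatenation
  throw new TypeError("add: mismatched argument types");
}

console.log(add(2, 3));         // 5
console.log(add("con", "cat")); // concat
```

The branches are the "convention": nothing but programmer discipline keeps the two meanings of add coherent.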

But the modern languages like Haskell do this explicitly and without any special needs.  You can re-use function names all day, and as long as every function has unique input and output types, Haskell will associate the right functions in the right way.  (I think Haskell is annoying in that it doesn’t even matter what order you make these declarations in, but some people seem to like it.)

It turns out this behavior is, for programming, really old.  Like, 1960’s-era old.  The λ-calculus that McCarthy turned into Lisp back in 1960 described it.  McCarthy didn’t know how to turn pattern matching into a general-purpose feature, so we ended up with “if” statements and conditionals and all the rest to decide these things by hand, but the original math seemed to require automatic pattern matching.  Take the “sequence” operation: This is when a function has a list of things to do, several statements in a row, and returns the value of the last expression.  The math for this reads:

Ε+[[π]]ρκσ = Ε[[π]]ρκσ

Ε+[[π π+]]ρκσ = Ε[[π]]ρ (λεσ1.Ε+[[π+]]ρκσ1) σ

The left side of the equation denotes “the meaning of…”  In this case, line one is “The meaning of a single statement is the meaning of a single statement after we’ve taken into account the heap and the stack.”  And line two is “The meaning of a single statement with more statements following is the meaning of the first statement, which we then discard to derive the meaning of the rest of the statements, all the while taking into account the heap and the stack.”  There’s actually more to that, as the environment and memory can change from statement to statement, but that’s already handled by our understanding of what ρ and σ mean.

In a common language like Python or Javascript, we would have an ‘if’ statement to say “If this is a single statement, we’re done and return; else process the first and iterate.”   But in Haskell you could just write the two equations explicitly, and it would work; the compiler knows exactly what you mean and does the right thing, because there’s no ambiguity there at all.
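In Javascript, that pair of equations collapses into exactly the ‘if’-driven version described above; a sketch, assuming a non-empty array of zero-argument statement thunks:

```javascript
// The 'if'-based version of the sequence rule: the meaning of a single
// statement is its value; the meaning of a statement followed by more
// is the first statement evaluated and discarded, then the rest.
function evalSequence(statements) {
  if (statements.length === 1) return statements[0](); // single statement
  statements[0]();                                     // evaluate and discard
  return evalSequence(statements.slice(1));            // meaning of the rest
}

console.log(evalSequence([() => 1, () => 2, () => 3])); // 3
```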

I just wish I could write them in English, which is my native tongue, and not have to translate from Greek in my head.


Chapter 4 of Lisp In Small Pieces introduces side effects. It also introduces an (excruciatingly simplified) taste of what managing memory feels like, with a monotonically increasing integer representing memory we’ve requested, and a corresponding identity in the environment.

More interesting is the abandonment of the Object-Oriented approach in Chapter 3 for a completely functional approach in Chapter 4. Everything is a function with enclosure. When you want a number, you have to send the message (number sValue) to extract it. You can get typing information with (object sType). The “sValue” and “sType” are, in my code, simple Javascript objects of type Symbol, around which I’m trying to formalize symbols and quotations, so that going forward it’s easy to distinguish a Symbol as a unique type– doing so may remove an entire class of difficulty in the read() function.
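The message-passing style can be sketched in Javascript with closures; the makeNumber name and the error behavior here are assumptions of this sketch, not the book's code:

```javascript
// Everything is a function with enclosure: an "object" is a closure
// that receives a message symbol and returns the corresponding slot.
const sValue = Symbol("value");
const sType = Symbol("type");

function makeNumber(n) {
  return (message) => {
    if (message === sValue) return n; // (number sValue) extracts the value
    if (message === sType) return "number"; // (object sType) gives the type
    throw new Error("unknown message");
  };
}

const three = makeNumber(3);
console.log(three(sValue)); // 3
console.log(three(sType));  // number
```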

Chapter 4 was supposed to also teach about memory management and variable “boxing,” but in fact it only contains a very cheap pair of stacks: one for the environment, the key of which is a name and the value of which is a number, and one for memory, the key of which is the number, and the value of which is the object: a symbol, a number, a boolean function or an arbitrary function. It makes a strong correspondence between this Lisp and the λ-calculus, and even shows how, with a few small extensions, the λ-calculus can accommodate assignments and similar side effects, although not necessarily externalities, like I/O. Extending this to a real memory management system is, I hope, not entirely left up to the reader as an exercise.
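That pair of stacks can be sketched in Javascript as two maps plus a counter; all of the names below are my own, a sketch of the idea rather than the chapter's code:

```javascript
// The two-stack memory model: an environment mapping names to addresses
// (monotonically increasing integers), and a store mapping addresses to
// values. Assignment is a side effect on the store, not the environment.
let nextAddress = 0;
const environment = new Map(); // name -> address
const store = new Map();       // address -> value

function define(name, value) {
  const address = nextAddress++;
  environment.set(name, address);
  store.set(address, value);
}

function lookup(name) {
  return store.get(environment.get(name));
}

function assign(name, value) {
  store.set(environment.get(name), value); // update the store in place
}

define("x", 42);
assign("x", 43);
console.log(lookup("x")); // 43
```

Splitting name-to-address from address-to-value is what lets the λ-calculus treatment accommodate assignment at all: the environment stays immutable while the store changes.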

I’m still trying to figure out how to write this monoidically, such that I can “lift” (see: The Elevated World) all of my functions out of the “standard Lisp lists and things” world to the “lisp lists and things encapsulated with metadata about compilation,” which would enable me to add debugging, source code, and sourcemap information to the resultant product. Maybe I need to do a few more monoid/monad tutorials, the more hands-on ones, to wrap my head around lifting.

The source code for Chapter 4 of Lisp In Small Pieces, Coffeescript Edition is, as always, available on Github.

Chapter 5 basically reiterates Chapter 4, only it does so entirely in Greek. I’m only half-kidding; it’s mostly an attempt to formally describe the interpreter covered in Chapter 4 using the formal language of an extended λ-calculus. It doesn’t have an interpreter of its own, except in a purely theoretical, head-in-the-clouds description of a λ-calculus-derived language written mostly in Greek symbols.  So there’s not going to be any source code associated with it.

I rarely rate apps, and almost never rave about them. But I have to rave about Sparse, the RSS Newsreader for Android, because compared to LinkedIn's rapidly decaying Pulse Newsreader, Sparse is a miracle of modern software design.

The biggest difference comes from the mediation Pulse provides that Sparse wisely ignores. Pulse makes everything you want to read flow through LinkedIn's servers; your feeds are registered with LinkedIn and maintained by LinkedIn. Pulse uses this to extract information from your feeds like headline images, which it can then analyze for "interestingness" (an algorithm I have passing familiarity with) and provide you with a pretty UI with headline images sized and cropped for your phone. But in doing so, if Pulse loses track of a feed, you lose out on content. This has been the case with several feeds I follow; right now, my Programming Zeitgeist feed has been showing me the "Let's Encrypt Schedule" press release from June for over two months. All those bells and whistles slow Pulse down terribly.

Sparse collects everything on your phone. You don't get pretty headlines. You just get headlines. On a per-feed basis, you get a list of articles to read, and nothing more. Clicking on an article takes you to the article. That's it. In terms of access to on-line newsfeeds, Sparse is the epitome of the maxim "Great design is not when there's no more left to put in; great design is when there's no more left to take out." Sparse allows you to specify on a feed-by-feed basis whether or not the feed should update over WiFi only, which is a data quota lifesaver.

Pulse achieves uniformity of feeds across devices by storing your feedlist on their servers, so every device you use has the same feedlist. But it doesn't track what you've read, so every device has to be "caught up" manually, and Pulse doesn't track well what you've read on a given device. Sparse achieves uniformity of feeds the old-fashioned way: export your OPML file, the file most readers use to store your list of feeds, and import it to another RSS reader. Sparse does a much better job of keeping track of what you've read, and can be configured to float read stories to the bottom of the list.

Sparse isn't perfect; it has some CSS display problems when feeds have color changes encoded into their text (such as sometimes happens with blockquote), and its display of video is rough, but you can always click the "(eye)" symbol and pull the article up in a native browser. Sparse doesn't do any image processing, so its cache can take up quite a bit of room on your phone. But compared to LinkedIn's Pulse it is a miracle of great design for a modern RSS Newsreader for Android.

Current Mood: satisfied

The Annihilation Score is a solid addition to the Laundry series, and it's nice to get away from Bob and look at how someone else handles the business. In this case, we get Bob's wife, Mo. In the Laundry series, magic is real and accrues around worlds dense with minds capable of symbolic thought. The book follows Mo as she attempts to handle an outbreak of superheroes; as our world becomes more and more imbued with thaumaturgic powers, people will interpret their acquisition of those powers in culturally familiar ways, and in our current culture that way is: superheroes! Mutant powers! From there, one of Stross's well-thought-out mysteries unravels until the final reveal. Stross has commentary to make along the way, about superheroes, about bureaucracy, about policing and its traditions, that is trenchant and obvious all at the same time.

The book has some-- not loose ends, but simply frayed ends. It feels like there were a lot of ideas off of which the freighted train of plot caroms as it hurtles down to an ending that the reader could see coming from a million miles away, but which is very much in keeping with Mo's character and, sadly, sets her up for a tragic fall in one of the coming books. Her enthusiasm for some of the good things in life can blind her to unfolding disaster even as we see it. Part of me wants to insist that the story has an idiot plot: if we can see the setup as it's building, surely she can as well.

Still, we get a lot of fun for the ride. Mo is forced to work with two of Bob's exes; one's a mermaid, the other's a vampire. She and Bob deal with a lot of angst and anguish now that Bob has become the host for (or perhaps simply has become) an ancient but unspeakable ally of humanity known as The Eater of Souls. We get an honest-to-God, very British version of Superman. And we get a deep lesson in what beat police work is really about, and how it can systemically fail at the extremes. (Contrasting the notion that Stross illustrates one extreme, the USA the other, is left as an exercise to the reader.)

Current Mood: satisfied


In Chapter 3 of Lisp In Small Pieces, you are encouraged to write a lisp interpreter using continuations. Here’s what I learned as I implemented the chapter 3 examples in Javascript:

A continuation is a representation of the incomplete state of the program. For any evaluation being done, the continuation at that moment holds everything else that must be accomplished in order for the program to finish.

It sounds odd, but it’s actually an easily understandable feature of any development environment with first-class functions and working closures– both of which Javascript has. Imagine the instruction print 4 * 2. At the very top, that breaks into “Evaluate four times two, then print the result.” The part after ‘then’ is the final continuation.

So it breaks down to:

  • Wrap the “print something” as a FinalContinuation. Pass that continuation to the evaluate function with a pointer to the rest of the AST (Abstract Syntax Tree) to evaluate.

  • Evaluate what the ‘*’ means. This retrieves a native function that expects two numbers and returns a product. Wrap the FinalContinuation with a new EvFunc continuation.

  • Evaluate what ‘4’ means. Continue on to…

  • Evaluate what ‘2’ means. Continue on to…

  • Apply the function. This continuation has been built by the previous three. Inside the source code, you’ll see EvFunc, which calls EvArgs and passes in Apply, a continuation to run after the arguments have been evaluated. Each instance of EvArgs creates a GatherArgs, which appends the discovered arg onto a list, and then passes the Apply continuation forward. When EvArgs discovers no more arguments, it finally calls the Apply continuation, which performs the primitive operation ‘4 * 2’, and then Apply calls its continuation, which is the FinalContinuation, which prints the value passed by Apply.

  • As that was the FinalContinuation, the program terminates.
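The steps above can be sketched as continuation-passing style in Javascript. The evalNumber and evalProduct helpers below stand in for the EvFunc/EvArgs machinery; they are assumptions of this sketch, not the chapter's actual code:

```javascript
// Each function receives a continuation k and hands its result
// forward instead of returning it.
function evalNumber(n, k) { k(n); }

function evalProduct(aExpr, bExpr, k) {
  evalNumber(aExpr, (a) =>   // evaluate what '4' means, then...
    evalNumber(bExpr, (b) => // evaluate what '2' means, then...
      k(a * b)));            // apply '*' and continue
}

// The FinalContinuation: print the result and stop.
evalProduct(4, 2, (result) => console.log(result)); // 8
```

Nothing ever "returns" to a caller; the program terminates when the final continuation runs out of work to hand forward.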

Rather than stepping through the program line by line, we step through continuation by continuation. Each line of a program becomes its own continuation, encapsulating the current state of the system, as well as all possible future states– at least abstractly. For a while, this was really hard for me to understand, but now that I’ve done the exercise, I see that the notion “the rest of the program” doesn’t mean the rest of the syntax tree all laid out and ready to trigger, but instead means “What we’ve done before,” (the current continuation), “What we want to achieve and what we haven’t processed yet that needs doing to achieve it,” (the continuation we’re about to build), and “How we bridge the two after processing this node of the AST” (the current evaluation).

Each continuation becomes a handler of its own momentary responsibility in the interpretation process and a wrapper around all the future responsibilities that have thus far been discovered during interpretation.

That seemed like a lot of work, and I wasn’t entirely sure of the benefits of it. On the other hand, continuations do allow for some features of programming that are much harder without them: blocks of code with early return, catch/throw, and finally.

Blocks with early return were easy. When you enter a block, the interpreter creates a new continuation: what to do after the block. Then it creates another continuation: what to do inside the block. It then resumes the inside one. If the block ends, or if return is used, the results are delivered as the value to be processed by the after continuation. Blocks with labels look up the stack where the prior continuations were stored, and when a match is found, the program resumes using that continuation.
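A sketch of block-with-early-return in Javascript, modeling the "after the block" continuation with an exception-based unwind (an implementation shortcut of this sketch, not the chapter's continuation machinery):

```javascript
// Entering a block creates an "after the block" continuation;
// calling ret jumps straight to it with a value.
function block(body) {
  const token = {}; // unique marker for this block's unwind
  try {
    return body((value) => { token.value = value; throw token; });
  } catch (e) {
    if (e === token) return token.value; // our early return
    throw e;                             // someone else's unwind
  }
}

const result = block((ret) => {
  ret(42);            // early return: resume the "after" continuation
  return "unreached"; // never evaluated
});
console.log(result); // 42
```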

This is important: the stack’s purpose, which up to this point was strictly to maintain the identity and value of variables in a given scope, has a new responsibility: maintaining the integrity of the interpretation process itself. There is a new thing on the stack that is not a Symbol/Value pair! Or rather, it is a Symbol/Value pair, but not one you can dereference and manipulate on your own. Eventually, we learn that the classic call/cc (call with current continuation) does allow you to access it and run it whenever you want.

Catch/throw is more complicated. Throw statements can appear anywhere in the source code and abstract syntax tree (AST), and aren’t necessarily containerized within a catch block. When they trigger, a corresponding catch continuation has to be looked up and triggered. A catchLookup walks up the collection of continuations (not the environment, note) until it gets a match or the program dies with an uncaught throw.

A protect is even more important. In Javascript and Python, protect is usually called ‘finally’, and it means the collection of statements that must be executed within the current continuation-managing context. So when the body of the protected statement ends in any way, we look up the collection of current continuations (remember, they’re wrapping each other up to the BottomCont, creating a sort of stack-that’s-not-a-stack) until we find the continuation in which the protect was created, then continue with evaluating the final body, then continue on with execution. Because any continuation can be wrapped this way, the base continuation has to be modified, and both the ‘block’ and ‘catch’ handlers have to also be modified to scan for protected evaluations.
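Javascript gives you the behavior natively with try/finally, where the interpreter in the chapter has to rebuild it out of continuations. A sketch of the contract a protect has to honor, with names of my own choosing:

```javascript
// Whichever way the protected body exits -- normal value or non-local
// unwind -- the cleanup runs before the unwinding continues.
function protect(body, cleanup) {
  try {
    return body();
  } finally {
    cleanup(); // runs on normal exit and on unwind alike
  }
}

const log = [];
try {
  protect(() => { log.push("body"); throw new Error("unwind"); },
          () => log.push("cleanup"));
} catch (e) {
  log.push("caught");
}
console.log(log.join(",")); // body,cleanup,caught
```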

This is fun. And kinda deep. It goes way more into detail than your usual “let’s write an interpreter!” that you find on the Internet, even if one of my side-excursions was to write the λisperator interpreter.

Even this code is twonky. Symbols are not handled well. There’s an arbitrary distinction between initial symbols and created symbols. It’s not as bad as the λisperator interpreter where the continuation designations were kinda arbitrary.

The oddest thing about this code, and it’s endemic in all of these “write an interpreter in Scheme” textbooks, is that it rides hard on top of the scheme garbage collection. When the environment stack pops down, or when a continuation is executed and unwraps, the wrapper is simply discarded, and the assumption is that garbage collection just works at the interpreter’s interpreter level. You start to notice it a lot as you go through the exercise.


We all have superpowers. One of my friends once said that my superpower was the ability to look at any piece of software and, within minutes, be able to describe with frightening accuracy the underlying data structures and fundamental internal APIs used to manipulate it. The other day I ran headlong into Airtable, and the underlying data structure was pretty obvious. The performance is amazing, but the underlying data is, well, simple.

A spreadsheet is a Cartesian grid of cells. But the interaction between the cells is a graph, in which some cells contain values, and other cells contain expressions that are dependent upon those values. A hypersheet extends this notion to include the idea that some cells contain, or are driven by, data stored elsewhere (and usually stored using an RDF triple).
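A minimal sketch of that value-and-expression graph; the cell names and the recompute-on-read strategy are assumptions of this sketch, not Airtable's actual design:

```javascript
// Cells hold either a value or an expression over other cells;
// evaluation walks the dependency graph.
const cells = new Map();

function setValue(name, value) { cells.set(name, { value }); }
function setFormula(name, deps, fn) { cells.set(name, { deps, fn }); }

function evaluate(name) {
  const cell = cells.get(name);
  if ("value" in cell) return cell.value;
  return cell.fn(...cell.deps.map(evaluate)); // recompute from dependencies
}

setValue("A1", 2);
setValue("A2", 3);
setFormula("A3", ["A1", "A2"], (a, b) => a + b);
console.log(evaluate("A3")); // 5
```

A production system would invert this: propagate changes forward along the graph edges and cache results, rather than recomputing on every read.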

So: An Airtable is nothing more than a graph database, backed by Postgres, in which triggers written in a convenient subset of a popular language (let’s call it Javascript) can be written by the community to facilitate the filtering and processing of hypertext content on a cell-by-cell basis.

The trick of a spreadsheet is in extending it conveniently. But there’s already a book that teaches you how to do cutting, copying, and pasting of content. It’s not a fantastic book; it’s too centered on its own interests to abstract spreadsheet notions well, and it’s too concerned with C# (although the F# asides are fascinating). It does cover the issue of relative-and-absolute ranges and how to cut and paste them without losing track of what they refer to, so that’s good enough.

Seriously: That’s about it. I’ve linked to every paper and book you need, every idea, to re-implement Airtable. All it takes is some engineering. And let’s be honest here: if you get the infrastructure for Airtable right, you get how to write Wunderlist, Trello, and Evernote for free.
