Christopher Allan Webber firstname.lastname@example.org
Madison, United States
GNU MediaGoblin founder, former Creative Commons software engineer, python hacker, free software enthusiast, maker of weird drawings. See: http://dustycloud.org/ (Any pronouns are okay.)
Creating bundles with guix pack
Cool blogpost on the Guix blog about "guix pack", which is able to make standalone binary releases that run on any GNU/Linux distribution. It can also be used to generate reproducible Docker images!
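For a rough idea of the usage (flags as I understand them from the blog post; you'd need a working Guix install to actually run these):

```shell
# Build a self-contained, relocatable tarball with Guile and all its
# dependencies; -S adds a convenience symlink inside the archive.
guix pack -S /opt/gnu/bin=bin guile

# The same, but as a Docker image (load the result with `docker load`):
guix pack -f docker -S /bin=bin guile
```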
I hate the term "killer app", but Guix is definitely Guile's "killer app".
identi.ca is back!
now to return to your regularly scheduled programming of cwebber complaining about themselves
"You can do anything you put your mind to!"
Unfortunately I can only seem to put my mind to the things that don't matter right now.
Downloading https://mirror.hydra.gnu.org/nar/5ym03w0sba95i6l5bdgr1ybm30iy4dq0-guile-next-2.2.0 (44.4MiB installed)... guile-next-2.2.0 1.6MiB/s 00:05 | 7.7MiB transferred
Guile 2.2 is out!
After six years in the making, Guile 2.2 is out! New register machine based VM, optimizations make it faster than ever, and concurrent programming is a dream to hack on in it.
This is a huge release in Guile's history. Guile is now a very fast Scheme (not the fastest, but very fast), thanks to many compiler optimizations. It also comes with a powerful asynchronous programming layer... I'm biased, but I think that Guile is soon to be one of the most pleasant places to write modern concurrent applications. The last six years have also seen growth for Guile, partly because of the rise of Guix (which is written in Guile, and has in many ways become its de-facto package manager). I hope, and believe, that we're going to see it grow a lot more, and Guile 2.2 opens a lot of doors to make that possible.
For those who are interested in reading more, I recommend reading Andy Wingo's personal blogpost on the subject, as well as the release announcement sent to the mailing list, which has many more details.
Onward and upward! Things are already looking good for Guile, and I suspect they're going to get better.
David Thompson talks about his first compiler addition in Guile, which made it into this release (unboxed floating point numbers!), and encourages interested hackers to check out Wingo's list of ways people can get involved in compiler hacking. Guile's a great place to do it!
Guile 2.2, tomorrow
Tomorrow will be a tremendous day for Guile, as Guile 2.2, a HUGE release in the history of Guile, will come out.
In the meanwhile, you can read Andy Wingo's blogpost about it!
An anecdote on the lead-up to Common Lisp as a standard
In preparing for my "The Lisp Machine and GNU" talk, I've been reading The Evolution of Lisp. It's interesting.
In particular, I found this bit interesting, on the lead-up to Common Lisp. As context, today Scheme and Common Lisp are the "two big lisps" (as in, ones with multiple implementations... though Scheme maybe has the same problems today as are joked about below), but at the time Interlisp and MacLisp were the two big lisps, each fractured into tons of incompatible implementations.
2.10 Early Common Lisp
If there were no consolidation in the Lisp community at this point, Lisp might have died. ARPA was not interested in funding a variety of needlessly competing and gratuitously different Lisp projects. And there was no commercial arena--yet.
In April 1981, ARPA called a "Lisp Community Meeting", in which the implementation groups got together to discuss the future of Lisp. ARPA sponsored a lot of AI research, and their goal was to see what could be done to stem the tide of an increasingly diverse set of Lisp dialects in its research community.
The day before the ARPA meeting, part of the Interlisp community got together to discuss how to present a situation of a healthy Interlisp community on a variety of machines. The idea was to push the view of a standard language (Interlisp) and a standard environment existing on an ever-increasing number of different types of computers.
The MacLisp-descended groups came off in a way that can be best demonstrated with an anecdote. Each group stood up and presented where they were heading and why. Some questions arose about the ill-defined direction of the MacLisp community in contrast to the Interlisp community. Scott Fahlman said, "the MacLisp community is not in a state of chaos. It consists of four well-defined groups going in four well-defined directions." There was a moment's pause for the laughter to subside [Steele, 1982].
Oof! Reminds me a bit too much of trying to corral different federation projects for federation standards...
Maybe a ray of hope though: Several people from the MacLisp side did manage to coordinate, and Common Lisp did result from it, which may be one of the most impressive language standardization efforts in history...
Another bit, revolving around how to handle extensibility and standards:
One issue that came up early on is worth mentioning, because it is at the heart of one of the major attacks on Common Lisp, which was mounted during the ISO work on Lisp (see section 2.12). This is the issue of modularization, which had two aspects: (1) whether Common Lisp should be divided into a core language plus modules and (2) whether there should be a division into the so-called white, yellow, and red pages. These topics appear to have been blended in the discussion.
"White pages" refers to the manual proper, and anything that is in the white pages must be implemented somehow by a Lisp whose developers claim it is a Common Lisp. "Yellow pages" refers to implementation-independent packages that can be loaded in, for example, TRACE and scientific subroutine packages. The "red pages" were intended to describe implementation-dependent routines, such as device drivers.
Nevertheless, the first question is brought up by a direct reading of the issue: Division of Common Lisp into a core plus modules.
If this were taken to mean a proposal that would have partitioned the language into layers with a central layer and outer layers that depend on the inner ones, then Common Lisp could have been more easily subsetted, which would have led to obvious implementations on smaller machines. This would have satisfied the need for cheap, prolific implementations. This would also have made providing educational versions of the language more readily available. It also would have prevented the strong attack during the ISO meetings by Europe and by, to a lesser degree, Japan.
The response from influential members is revealing: "This seems weird. Motivate it. Maybe these modules are optional at the implementation's choice?" "Keeping things modular is a good goal, but don't expect to succeed completely." "The division only makes a little sense." [?; ?] The group focussed too much on the funny white-yellow-red distinction and not on the core-language/extended-language distinction. Had this gone differently, so would have the future of Common Lisp.
Note, this sounds not too unlike the decision to break r7rs into r7rs-small and r7rs-large, for anyone who knows/cares about that in Scheme-land...
Gosh, more good quotes
Carefully deferred was the decision regarding whether () was a symbol. Even though this decision was left until nearly the end of the decision process--causing people to emotionally accept Common Lisp and attach part of their egos to it--when the discussion came up, it was divisive. Symbolics threatened to withdraw from the group unless their position was accepted, and so it was. The salient paragraph from their message is as follows:
We have had some internal discussions about the T and NIL issues. If we were designing a completely new language, we would certainly rethink these, as well as the many other warts (or beauty marks) in Lisp. (We might not necessarily change them, but we would certainly rethink them.) However, the advantages to be gained by changing T and NIL now are quite small compared to the costs of conversion.
The only resolution to these issues that Symbolics can accept is to retain the status quo.
This shows that there are some issues, apparently trivial, that can have a profound influence on people.
FSF licensing team on GitHub ToS change
tl;dr: while there are other reasons to not host on GitHub, it doesn't seem incompatible with copyleft (though more clarification would be useful)
A problem I have: I can follow the mathematical concepts in papers, but not the notation. I don't know how to get past this. It means that I sometimes read programming papers and I'm like right, right, makes sense [EQUATION] mind blanks.
Has anyone else overcome this? How? I have a "my brain learns best by experimenting" mode, maybe I need to play more with... something?
I found certain mathematical notations easier to follow once I learned Haskell. In many cases, I only have to squint to ascii-ize symbols to their Haskell equivalents. Also, mathy variable naming like a' is idiomatic in Haskell.
Put the meaning of each variable in a column near the equation.
Then use different colors to highlight each variable (both in the explanation column and in the equations).
Draw circles/strikes around/on parts of equations that make sense by themselves, or represent a concept, or are transformed into another thing in the next step (in a similar way as a teacher does at the blackboard while explaining things).
Maybe look for videos explaining the theorems etc (since they visually show all these techniques, I guess).
HTH. I have the same "thought laziness" problem, with my mind turning to white noise when reading scientific equations, and also with musical scores (which I spent years learning at academy and school...)
I think what worked was to use them not only in a passive way (reading) but an active one (writing)... and I think it's the same with code.
So maybe trying to put some concept of yours into equations with LaTeX could help?
Emacs running wireworld
Don't ask me what I should have been working on today though...
Okay... it happened. I implemented Wireworld in GNU Emacs. Ascii art circuits ahoy!
@George Standish Well, luckily! I just wrote about what wireworld is, but to reduce clicks, here it is on wikipedia and here's a whole computer built in wireworld. It's cellular automata, like Conway's Game of Life, but more circuit'y.
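For the curious, the Wireworld rules are tiny. Here's a minimal sketch in Python (not the Emacs Lisp from the post; just an illustration, using a hypothetical sparse-dict grid representation):

```python
# Wireworld cell states: "head", "tail", "wire"; empty cells are
# simply absent from the grid dict, which maps (x, y) -> state.

def neighbors(pos):
    """The eight Moore neighbors of a grid coordinate."""
    x, y = pos
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def step(grid):
    """One Wireworld generation: head -> tail, tail -> wire, and
    wire -> head iff exactly one or two neighbors are heads."""
    new = {}
    for pos, state in grid.items():
        if state == "head":
            new[pos] = "tail"
        elif state == "tail":
            new[pos] = "wire"
        else:  # wire
            heads = sum(1 for n in neighbors(pos) if grid.get(n) == "head")
            new[pos] = "head" if heads in (1, 2) else "wire"
    return new

# A tiny three-cell wire: the electron travels left to right.
grid = {(0, 0): "head", (1, 0): "wire", (2, 0): "wire"}
grid = step(grid)
# The head at (0,0) becomes a tail, and (1,0) becomes the new head.
```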
Luckily, if you're vi-accustomed, there's Spacemacs, which I hear is excellent.
Me IRL: Why get things done on the computer I have when I could get nothing done on an imaginary computer?
..... ..... .... ....
""""||"""" """""" """", """||"""
'''''''' ||' ||' '' ''''''
"""""" ||"" ||""" """"
'''' ||' ||' \' ''
"" ''"""" """ '" ""
******* **** **** ***
*** *** ** ** ** ** ***
*** ** ** ** ** ***
*** *** ** ** ** ** *** ***
******* **** **** ******
Lisp Machine and GNU @ LP 2017
I'm talking at LibrePlanet 2017 on "The Lisp Machine and GNU":
You may have heard of Stallman and the printer, but much of free software's genesis involves the battle over the soul of the lisp machine. We'll trace Lisp and the Lisp Machine's roots, from its genesis in early hacker culture and the AI labs, to the split that (largely) pushed RMS to found GNU, through its role within and without the free software community. Why did GNU become a "Not Unix", and why not a lisp machine? What about the role of Lisp within GNU, with projects like Emacs, Guile, and Guix? For those who are new to Lisp, there will be a mini-tutorial.
Some interesting emails from really old lisp mailing lists
The lispnews list is a bit hard to read, but unveils some key lisp ideas one after another in their earliest state; fascinating stuff. First reference to unwind-protect, and the details of backquote/quasiquote are being worked out here. (EDIT: more on backquote's history.)
Here are some interesting bits: David Moon (who worked on Common Lisp, helped develop Emacs, and was one of the original developers of the lisp machine) mentioning Common Lisp and the CADR switching to it; rms (who was a maintainer of lisp software at the time) not being so pleased about it, or the way it was announced; and Guy L. Steele (who was editing the Common Lisp standard) replying. Later RMS seems to be investigating how to make it work together.
Sadly it seems that debate was discouraged on that list, and I don't see the BUG-LISPM list around anywhere.
You probably noticed that I was cherry-picking reading emails by RMS. It's no coincidence... I knew this was coming up, and here it is:
Here also is where Symbolics started to move out of the AI lab and where they announced that MIT may use their software, but may not distribute it outside the lab... which is, according to my understanding, one of the major factors frustrating rms and leading to the founding of GNU. A quote from that email:
This software is provided to MIT under the terms of the Lisp System License Agreement between Symbolics and MIT. MIT's license to use this software is non-transferable. This means that the world loads, microloads, sources, etc. provided by Symbolics may not be distributed outside of MIT without the written permission of Symbolics.
There it is, folks! And here's another user, Martin Connor, raising concerns about what the Symbolics agreement will mean. That person seems to be taking it well. But guess who isn't? Okay, you already guessed RMS, and were right. Presumably a lot of argument about this was happening on the BUG-LISPM list. I guess it's not important, but here is an amusing back and forth. I wonder if anyone has access to the BUG-LISPM or BUG-LISPM-MIT lists still?
Notably RMS wants to clarify that his work doesn't go to Lisp Machines Incorporated specifically, either, even though he was more okay with them.
I'm giving a talk at LibrePlanet 2017 on the Lisp Machine and GNU, which explains why I'm reading all this! Okay, well maybe I would have read it anyway.
Beyond Burger unsolicited review
Nobody's paying me for this; I found out from @John Sullivan that the Beyond Burger is being sold in stores (has been for a while). In fact I hadn't heard of this one... I've been waiting with anticipation to try the Impossible Foods burger but that isn't available yet. I checked, and the Beyond Burger already was at a Whole Foods near me (they said they've been stocking it for some time) so I ran out and cooked it up for dinner. Here's my impressions:
- I'd say that was def the most beef-burger-like burger I've had. Not perfect, but 98% of the way there. There were a few times where I completely forgot it was a veggie burger.
- The burger looks a lot, but not exactly, like raw beef when in the package. A cool but unnecessary touch. It looks a little bit squishier / more homogeneous, but it's pretty close.
- It definitely sizzled like real meat, and there was more "juice" in the pan than there was oil that I initially put into it... I'm not used to that with frying veg food in a pan, I'm used to it all being absorbed, so I was surprised.
- I even ran a small piece of bread through the pan and it tasted kinda like what I remember beef's residue tasting like, not quite but kinda.
- Came out brown on top and bottom, but middle remained mostly the appearance it had when I started cooking it, so it looks very rare.
- Texture and taste: not exactly there, but so damn close it doesn't matter. At moments when eating it, staring off into space, I forgot that I was vegetarian. Zapping back into reality, I caught the last 2% of the not-quite-there, but even then it was so damn good it didn't matter.
- Biggest downside: the packaging felt super excessive. It's a lot of cardboard and plastic surface matter for just two patties. That said, I realized afterwards that the ground beef I grew up with came in styrofoam-lined bottoms, so maybe not so bad.
- The carbon footprint of the product itself is supposed to be a ton lower; that's one of the big selling points. Which is what made the packaging surprising.
- Morgan, as a non-vegetarian, said it was surprisingly close.
- Price: also pretty good I thought, for what it was. It was $3 I think for the package of two patties, but maybe it was $3.50 or $4; divide by two that's somewhere between $1.50 and $2 per patty. A lot more expensive than a Morningstar or Boca patty, but not bad at all compared to say, eating out, so it could be a reasonable "now and then" meal.
- Ingredients seem not so unreasonable.
Overall I'm impressed, won't be buying all the time but I definitely will occasionally.
I wonder when eventually we'll be able to get the "heaping mound of 'raw' un-beef" you can get with real meat in the store.
In the meanwhile, I'm looking forward to the Impossible Foods burger, hoping it crosses that last 2%! They're growing blood-like heme in a vat for that one (whereas the Beyond Beef burger gets its "bleeding" appearance from beets). Sounds good! I'm convinced the planet can't afford meat consumption as it is (not to mention ethical issues related to modern animal harvesting itself); a lazy alternative for the everyperson is good for every person.