This directory contains excerpts from the archives of the LISP-FORUM
mailing list. Different generations of the same FN1 are in fact
different files and should not be reaped. Maintained by KRONJ and KMP.

Date: 11 December 1980 18:07-EST
From: George J. Carrette
Re: More LOOP fan mail.

A macsyma system programmer used loop in an included file:

    (LOOP FOR X IN PRELUDE-FILES
          COLLECT X
          COLLECT (GET X 'VERSION))

However, we made him remove it to save us the slowdown of the loading
of loop. He grudgingly replaced it with this:

    (DO ((X PRELUDE-FILES (CDR X))
         (RESULT NIL))
        ((NULL X) (NREVERSE RESULT))
      (PUSH (CAR X) RESULT)
      (PUSH (GET (CAR X) 'VERSION) RESULT))

Ok, uglier than the loop, maybe. But there is a nice mapping way to
do it:

    (MAPCAN #'(LAMBDA (X) `(,X ,(GET X 'VERSION))) PRELUDE-FILES)

Here are the results from compiling these three examples (in
GJC;LOOPT >):

    Example | code size (pdp10)
    ---------------------------
    LOOP      29.
    DO        23.
    MAP       11.  including SUBR generated from the LAMBDA.
    MAP       27.  with MAPEX T to give open compilation.

The LOOP macro produces almost three times as much machine code as
the map in this example. Observations? Using LOOP avoided using
back-quote. GLS: Doesn't an optimizing compiler have a tougher time
with a PROG construct and GO-TO's than with a regularized mapping
construct? Maybe we need a shorter name for lambda, so people won't
be afraid to type it?
How do the proposed NIL mapping extensions compare with LOOP? -gjc

Date: 13 December 1980 16:28-EST
From: David Chapman
To: GJC at MIT-MC, LISP-FORUM at MIT-MC
Re: More LOOP fan mail.

    Maybe we need a shorter name for lambda, so people won't be
    afraid to type it?

I have experimented with two solutions: defining the lambda character
to be a lisp magic read character that turns into 'lambda; and
defining it to be a zwei command that inserts #'(lambda ()) at point
and puts point inside the null arglist. Both have made me more
willing to use lambda expressions, though they have quite different
feels.

Date: 13 December 1980 16:58-EST
From: Kent M. Pitman
To: ZVONA at MIT-AI
cc: GJC at MIT-MC, LISP-FORUM at MIT-MC
Re: This really has very little to do with LOOP

Actually, on the LispM I was surprised initially and continue to be
so that the 1-char lambda symbol is not an acceptable char in place
of LAMBDA in the car of an anonymous function description. Or better
yet, why not have λ do the following:

    (DEFMACRO λ (&REST BODY) `#'(LAMBDA ,@BODY))

so that it is also self-quoting as well?

Date: 14 AUG 1980 1709-EDT
From: KMP at MIT-MC (Kent M. Pitman)

I have created a new mailing list called LISP-FORUM which is the
approximate union of BUG-LISPM, NIL, and BUG-LISP. It exists only as
LISP-FORUM@MC -- additions/deletions should be done on MC's mailing
list file. Let's use this to discuss cruft like what follows in
paragraph 2.

Paragraph 2: More on what was discussed in Paragraph 1
------------------------------------------------------

Is there anyone that is opposed to having #/char or #\char return a
character object (on systems with no char objects, a fixnum; on
systems with char objects, a NON-FIXNUM char object)? JONL tells me
that the Lisp Machine people are the ones that have to be convinced
in order for it to be put in. Can we hear feedback from anyone that
objects to this?
I.e., people who rely intimately on #/ returning a fixnum and whose
code will be expected to be transportable and whose code will be
broken by such a change? I so far haven't heard anyone but JONL voice
a negative opinion on this, and he only to say that the LispM people
would never go along with it. As JAR points out, no changes to code
would be needed except to code that wants to be compatible with new
systems that offer char objects. -kmp

Date: 14 August 1980 20:49-EDT
From: Kent M. Pitman
To: ALAN at MIT-MC
cc: LISP-FORUM at MIT-MC

No, #/ would *not* generate symbols. Only in languages with a
datatype for character objects (i.e., in NIL) would there be a
difference. -kmp

ps to all: A LISP-FORUM mailing list now lives on both MC and AI. The
transcripts will continue to live only on MC in MC:LSPMAI;LISP FORUM

Date: 1 OCT 1980 2011-EDT
From: RMS at MIT-AI (Richard M. Stallman)

If you are sending a message to LISP-FORUM, please don't mail it to
BUG-LISP or BUG-LISPM. We can assume that anyone who is interested in
flaming about how Lisp ought to work is on LISP-FORUM. There is
probably nobody on BUG-LISP or BUG-LISPM who is not on LISP-FORUM as
well, but if there is, he has probably decided he is not interested
in these issues.

Date: 21 October 1980 16:48-EDT
From: Daniel L. Weinreb
Sender: dlw at CADR2 at MIT-AI
To: GJC at MIT-MC, lisp-forum at MIT-MC

I, too, find it hard to believe you are serious about GOSUBs. We
should definitely have internal procedures. By the way, the whole
idea of LISP-FORUM is so that you don't have to CC the letter to
BUG-LISP and NIL as well. Just LISP-FORUM will suffice.

Date: 14 Dec 1980 2044-MST
From: Griss at UTAH-20 (Martin.Griss)
Re: Add me to mailing list

Please add @utah-20 to LISP-FORUM. I am working on a LISP written in
LISP and compiled to Dec-20, VAX, etc. Hope to get extended
addressing version on 20 working soon. M

Date: 31 JAN 1981 1729-EST
From: RMS at MIT-AI (Richard M. Stallman)

Several people have been replying to LISP-FORUM with their reactions
to messages which I sent to something other than LISP-FORUM. Aside
from the fact that I sent them to someplace else for a reason, this
must confuse everyone on LISP-FORUM. So if you think you want to move
the discussion from whatever mailing list I used to LISP-FORUM, you
should at least enclose a copy of the message you are responding to.

Date: 22 April 1981 15:58-EST
From: Kent M. Pitman
Re: New Maclisp mailing lists

BUG-LISP has been revamped into a set of more constrained mailing
lists. For further info, see the file MC: LSPMAIL; MACLISP INFO -kmp

Date: 10 Nov 1980 (Monday) 0512-EDT
From: SHRAGE at WHARTON (Jeffrey Shrager)
To: lisp-forum at MIT-AI, nil at MIT-MC
Re: Improving the LISP working environment by taking a lesson from... APL!

I think that there are several problems inherent in the design of
LISP and its environment that have been neatly avoided in APL. I
think particularly of the following:

1- When a programmer writes a function for his system it is "eaten"
by LISP. That is, it becomes a part of the workspace as if it had
been there all along. This is "neat" but painful for large system
design because it means that you need to carry around a notebook of
all your functions in order to remember what you have there. In APL,
you can ask the system to list the names of all your functions or
variables. This is a simple but powerful idea and should exist in
LISP without having to go traipsing through the oblist. Also, the
ability to get these names into a data structure (admittedly somewhat
klugely implemented in APL) permits the writing of workspace
documentor programs and the like.

2- The CONTINUE workspace feature of APL permits a user to logout and
come back an hour later, the next day, etc. with his environment
intact as he left it (this also happens if, for some reason, the
system craps out).
This is simple enough: a common checkpoint name that is automatically
restored when LISP starts up, if it exists.

3- LISP has this one real unfortunate feature: it is too flexible and
too simple simultaneously. This is the reason that it is so hard to
transport LISP systems from one machine to the next. "Basic LISP" is
so trivial that no one can write anything in it. Thus, every
implementation has devised its own set of cover functions, making
programs written there completely incompatible with the rest of the
universe. The flexibility of LISP makes this a real easy task. APL is
not quite as flexible (you cannot easily add "primitives" to the main
loaded system), but the primitive set is sufficiently rich that you
usually do not need to do so. Thus APL is probably one of the most
transportable languages in the world [don't thumb your nose at
transportability -- it is quite important if you ever expect to see
any of your work outside of MIT]. The MAPCAR, etc. functions were a
half-hearted attempt to correct the problem of the simplicity of
LISP -- useful and more powerful than CAR, CDR... I don't know quite
what to do about this one -- since the NIL people are writing a "new
LISP" they might be able to make use of this commentary; I think that
the MacLISP users are already fried (sorry). Some of the problems
cited above are due to the general verbosity of LISP compared to APL;
in particular, there is much greater proliferation of functions in a
large LISP system than in one in APL, because of the difficulty of
writing COMPACT one-liners [just throw it in another function]. Also,
note that I am not saying "use APL"; rather, learn from the
popularity and usability of the APL environment. LISP is better than
APL for what we do with it, but I think that the environment needs
work (and has for a long time) and that it is about time someone did
something about some of the other problems in LISP as well. NIL is
the place to strike out at these.
-- Jeff Shrager

Date: 10 Nov 1980 10:41 PST
From: Deutsch at PARC-MAXC
To: SHRAGE at WHARTON (Jeffrey Shrager)
cc: lisp-forum at MIT-AI, nil at MIT-MC
Re: Improving the LISP working environment by taking a lesson from...

1- In Interlisp, the Masterscope facility lets you easily find out
what all your functions, variables, etc. names are. In fact, it
includes a powerful query facility for discovering relationships in
your program, much better than anything in any other language or
system I know. It is callable both from the terminal (with pleasant
English-like syntax) and from programs.

2- The SYSOUT facility in Interlisp makes it very cheap (in time) to
create checkpoints which you can resume from later. Many Interlisp
users prefer the APL "workspace" style, keeping a SYSOUT around to
work in for days or weeks.

3- It's not simply a matter of "cover functions"; different Lisp
systems have chosen to develop themselves in quite different
directions. For example, Interlisp has put tremendous emphasis on
managing the programming process through history retention, building
up a data base of your code, etc., while MACLisp has placed more
emphasis on efficient compilation and certain kinds of system
simplicity. APL hasn't grown in ANY of these directions as far as I
know.

To quote Joel Moses (approximately): "APL is a diamond -- you can't
add anything to it, even another diamond, without ruining its beauty.
Lisp is a ball of mud: you can keep adding more and more mud to it
and its nature doesn't change." I think it is precisely because Lisp
is really almost like an assembly language for a particularly
interesting machine that it has been used to do such a tremendous
variety of things.

Date: 10 November 1980 14:37-EST
From: Daniel L. Weinreb
Sender: dlw at CADR17 at MIT-AI
To: LISP-FORUM at MIT-AI, SHRAGE at WHARTON-10
Re: Improving the LISP working environment by taking a lesson from...

I generally agree with what Peter Deutsch said.
I would like to point out some further things, regarding how these
problems are handled in the Lisp Machine programming environment. The
way files are used and the way the editor is used keeps user
functions conveniently grouped together and separate from system
functions; they don't get "lost". In APL, if I load up a large
package of useful functions written by someone else, or by myself,
then are those "system" or "user"? What if I am working on more than
one thing at a time? An important part of the Lisp Machine design
philosophy has been to avoid the "system"/"user" dichotomy, so that
there can be programs shared by many users that are not necessarily
in the "system". Indeed, nobody ever saves away Lisp Machine
workspaces, and it might be nice if you could. But they are just too
big to save in a brute-force way, and too complex (and probably still
too big) to try to save in a clever way. Actually, I have never found
much need for this myself.

You are quite wrong about "cover functions". Lisps are incompatible
because they are truly different; there are things you can do in one
that you cannot do in another. Lisp has been growing and improving at
a tremendously high rate at MIT for the last several years; I have no
desire to stop this improvement by trying to crystallize a diamond a
la APL.

I am completely in favor of the verbosity of Lisp. Lisp programs that
I am familiar with usually have the right amount of subroutinization.
APL is terrible this way. I think Lisp has something to learn from
APL, but it is mostly in convenient handling of arrays that Lisp
suffers, not in the programming environment.

Date: 10 Nov 1980 (Monday) 1708-EDT
From: SHRAGE at WHARTON (Jeffrey Shrager)
Re: Of diamonds and mud:

Let me begin by clarifying my position on one count: I stated that
LISP (at the time I had MacLISP in mind) could use some enhancement
in the workspace environment. It is fine by me if the ideas for the
enhancement come from the CADR or InterLISP.
Being an APL hacker (as well as a LISP hacker) I simply chose APL as
my framework. Granted, you can do much more interesting things than
are done by APL [as pointed out by Peter Deutsch]. This only supports
the argument that there is a lot that could be done with the
workspace in NIL. I assume that the existing facilities (CADR,
InterLISP, etc.) are being actively studied.

I stand, however, by my statement that the LISP language has the
problem of uncontrolled extensibility. Take, for example, InterLISP
(since Peter did bring it up) -- OSHA should ban the use of InterLISP
on the grounds that one could get a hernia lifting the manual. I
have, in my wallet, a card approx. 3 by 4 inches, which on both sides
describes the entire APL language in almost as vivid detail as the
InterLISP manual describes that language. Again, I am not pushing
APL, merely dramatizing my point.

If LISP is really, as you say, an assembly language for a
particularly interesting machine, then I suppose that heavy-duty
support hacking is ok -- that is, you are building an environment to
run ONLY ON THAT MACHINE!!!! Is that really what you want to do?
Never move your work to other machines? This is a major distinction
between assembly languages and higher-level languages... people do
not worry about their kruftiness because they'll never see the light
of transportation. Ever tried to move an InterLISP program (I
have) -- you basically have to come in with wrenches and a truck and
take the KL with it in order to preserve the environment required to
support it. Is this really the way to design a programming language?
Maybe it is. As another example, take Peter's point about
Masterscope -- a very good point indeed [i.e., it does what I had
said LISP needed and more], but try to find that out without having
had someone tell you this!
To carry the Moses analogy a bit... [firstly, I simply must defend
APL in saying that there are a whole lot of bright people taking the
time to add only diamonds to the APL language -- adding one speck of
mud DOES mess it up] "APL is like a diamond -- you can add only
diamonds to it or you will ruin its beauty. LISP is like a ball of
mud; a bit more dirt won't mar it any; it is a mess already." Another
hack or two into the language will not bother it or anyone. Another
hacking in InterLISP won't even make it to the surface of SRI (or
whoever) for a long time, most likely. We have the opportunity in NIL
to mold the ball of mud. I do not propose CRYSTALLIZING it as
suggested by Peter. Rather, clean off some of the dirt and harden the
supporting rock foundation -- it is a very good foundation under all
that mud. [I fully expect to get blasted over this -- blast away]

Date: 10 November 1980 19:22-EST
From: Kent M. Pitman
To: LISP-FORUM at MIT-MC, SHRAGE at WHARTON-10
Re: Muddy diamonds

Actually, I disagree with the idea of having functions in the system
which do workspace saving, etc. It is much more important that you
have the primitives you need to build such functions. Maclisp will
certainly provide you with that. Fanciness like saving workspaces,
etc., is something that everyone has personal preferences about.
Indeed -- the concept is unthinkable to me on a conventional
machine -- I'd rather not waste my address space on it. (I suspect
that attitude will change with the advent of larger address space
machines; I certainly don't begrudge the LispM for having Zwei in its
address space...) My point is that quite often such packages might
want to be just publicly accessible libraries which people can load
if they like, or not if they don't. And such public libraries should
be well advertised so that people don't waste their effort writing
personal hacks to do things that someone has already done better.
On this last point, Maclisp is somewhat lacking, as there has been a
marked lack of documentation -- hopefully that will change one of
these days... And I don't believe in the Interlisp philosophy of
adding function after function and package after package, growing
seemingly without bound. Pretty soon things do get muddy if you do
things that way. Some day we'll have a lot of experience in how
environment saving, building, and manipulating can best be done --
and at that time it might rightly be said to be a good thing that it
be a property of a language -- but right now I don't think anyone
knows the right way, so I'd just as soon retain some distinction
between the Lisp language itself and the environment provided by a
given Lisp implementation or invocation. APL tries to define things
too rigidly, and I think it might be a serious error to follow suit
for the time being...

Date: 10 Nov 1980 (Monday) 2158-EDT
From: SHRAGE at WHARTON (Jeffrey Shrager)
To: kmp at MIT-MC
cc: lisp-forum at MIT-AI
Re: I agree that it should not be as rigidly defined as APL

Date: 10 November 1980 23:48-EST
From: Daniel L. Weinreb
Sender: dlw at CADR7 at MIT-AI
To: LISP-FORUM at MIT-AI, SHRAGE at WHARTON-10
Re: Of diamonds and mud:

Your comment about the size of the Interlisp manual reflects a common
misconception. I think exactly the opposite of what you do about
that. If I had to program in a system all of whose functionality
could be described on a 3 by 4 inch card, then I would have to
reinvent the wheel every time I did anything. The primary, number-one
reason why the Lisp Machine is an extremely superior programming
environment is that there is a wide set of packages for doing common
things already available. When I want to build something, all the
primitives are there, to a much greater extent than in any other
environment I have ever seen. This has nothing to do with
portability; you could implement Lisp Machine Lisp on another
computer and everything would work.
Of course, it would be hard to implement the whole language, because
it is so big, but its size is, in a sense, its most important
feature -- I don't want to use some tiny, easy-to-transport language
that doesn't do me any good. If this environment is only available on
the Lisp Machine, then I will do all my work on Lisp Machines rather
than putting up with an inferior environment simply to gain
transportability.

Also, your slurs about "another hack or two in the language" are
unfounded. You say "we have the opportunity in NIL to mold the ball
of mud" -- do you think you are the first person to think of this?
The NIL design team has spent immense effort trying to mold, codify,
and make consistent and straightforward a wide range of useful
features. When we in the Lisp Machine group introduce a new feature,
we spend more time on making it clean, consistent, and
well-thought-out than we do on coding and debugging put together.
I'll let someone else speak for Interlisp; while their philosophy
differs from ours on many points, I'm sure this is not one of them.

Date: 11 Nov 1980 (Tuesday) 0049-EDT
From: SHRAGE at WHARTON (Jeffrey Shrager)
Re: LISP machine diamonds

Firstly, I do not mean anything that I say as a "slur". If anyone
takes it as such, I apologize for the miswording (or misreading). You
have a point in the richness argument, although I think that you will
find less reinvention of wheels than you think (perhaps some
betterment). Also, I assumed that the NIL people were being careful
in their design; I just wanted to be sure that they were taking
precautions so that their carefully molded ball of mud does not go
back to any sloppiness that it might have picked up over the years.
Although I have not experienced a LISP machine, it sounds to be a
very well designed system. NIL will, of course, follow suit in this
regard. Who knows, maybe we will end up with a diamond after all.
Date: 11 Nov 1980 0114-EST
From: JoSH
Re: lisp vs apl

We had a lisp/apl bboard war locally half a year or so ago; it
consisted mostly of quips, rather than flames as in the present case,
but occasioned some thoughts on the subject nonetheless. APL in its
early (APL\360) form was much more a diamond than it is now: modern
APLs with quad-variables and -functions and filesystems and so forth
are actually as much balls of mud as many lisps -- you might think of
lambda-calculus as a diamond too. For my money the major differences
between the languages are not hard to find: syntax and semantics. APL
is good for problems that can be solved in a day or less, and indeed
in APL you can do some pretty amazing things in a day or less. I
would sit down at a bare APL with problems that would require
significant programming in LISP. On the other hand, for my money, APL
has some awkwardnesses when it comes to building large systems. My
analogy, instead of diamonds and mudballs, would be of a set of
snap-lock panels and struts as opposed to a set of bricks and mortar.
I wouldn't want to be caught in the rain with hod and trowel in hand,
but at the same time I'll use them for more permanent structures.
These comments are relative only; I'd take either LISP or APL over
most other languages/programming systems I've seen. And both are
painfully out of date compared with what we ought to have by now.

Date: 11 Nov 1980 (Tuesday) 0300-EDT
From: SHRAGE at WHARTON (Jeffrey Shrager)
To: josh at RUTGERS, lisp-forum at MIT-AI
Re: APL and LISP

1- Agreed that LISP and APL have it all over most other languages in
COMMON practice (thus excluding a lot of things like CLUE, ALICE,
etc.).

2- I'm actually sorry that I mentioned APL to start with. Although a
comparison of APL and LISP might be interesting in its own right,
this is not the point of our flaming. Rather, we are arguing a
subject that I might call, in the recent style of SIGPLAN:
"Uncontrolled extensibility considered harmful."
I think that we actually all agree to a very large extent about APL
vs. LISP and InterLISP vs. MacLISP vs.... It is the philosophy of
system building that is being argued. The simplistic solution is to
provide infinite flexibility and have the user (programmer) do with
it as he sees fit. If he wants to be careful, he (she) will. Of
course, there is always the potential, no matter how carefully the
environment is designed in this respect, to make a mess if you try. I
started out arguing that LISP has lost (perhaps was never intended to
have) transportability or commonality of any kind (above the basic
list data structure) due to its laxness in this area. If this is the
way the designers want it, then fine, but I think that that is a
mistake. What I had wanted to discuss was not incorporation of APL
into LISP (although there is, again, some interest there) but
corrections to the LISP environment as defined by MacLISP that would
help with this problem. Some LISP implementations, CADR for one, have
already done a lot of this. Fine!

Date: 11 November 1980 03:28-EST
From: Alan Bawden
To: SHRAGE at WHARTON-10
cc: LISP-FORUM at MIT-MC
Re: APL and LISP

    Date: 11 Nov 1980 (Tuesday) 0300-EDT
    From: SHRAGE at WHARTON (Jeffrey Shrager)
    ... Rather, we are arguing a subject that I might call, in the
    recent style of SIGPLAN: "Uncontrolled extensibility considered
    harmful."

Indeed, this argument seems very much like the "GOTO considered
harmful" arguments to me. A language without GOTO is a cripple,
despite any "elegance" you gain by removing it. Whatever it is that
you propose to do to "control" the extensibility of Lisp can only
harm it. Just as with GOTO, if you want a truly powerful language,
then you have to leave in the parts that are capable of producing
ugliness.

Date: 11 November 1980 03:27-EST
From: Richard M. Stallman
To: SHRAGE at WHARTON-10, Deutsch at PARC-MAXC
cc: LISP-FORUM at MIT-AI
Re: Improving the LISP working environment by taking a lesson from...
    Date: 10 Nov 1980 10:41 PST
    From: Deutsch at PARC-MAXC
    In-reply-to: SHRAGE's message of 10 Nov 1980 (Monday) 0512-EDT

    1- In Interlisp, the Masterscope facility lets you easily find
    out what all your functions, variables, etc. names are.

EMACS makes it easy to find out what Lisp functions you have in a
file, and where they are called from, even among several files.

    2- The SYSOUT facility in Interlisp makes it very cheap (in time)
    to create checkpoints which you can resume from later. Many
    Interlisp users prefer the APL "workspace" style, keeping a
    SYSOUT around to work in for days or weeks.

Maclisp also allows an environment to be saved, just as easily, but
there is no application for it in our normal way of working. We keep
our programs in text files, not in Lisp environments. When programs
are kept in files, it is usually not useful to save a checkpoint.

    3- It's not simply a matter of "cover functions"; different Lisp
    systems have chosen to develop themselves in quite different
    directions. For example, Interlisp has put tremendous emphasis on
    managing the programming process through history retention,
    building up a data base of your code, etc., while MACLisp has
    placed more emphasis on efficient compilation and certain kinds
    of system simplicity.

This is unfair and misleading. The absence of tools for programming
in Maclisp is because we do the programming in EMACS. The tools are
in EMACS, not in Lisp.

Date: 11 Nov 1980 (Tuesday) 0406-EDT
From: SHRAGE at WHARTON (Jeffrey Shrager)
To: alan at MIT-MC
cc: LISP-forum at MIT-AI
Re: control vs constraint

Control and constraint are not the same thing. A control might be an
audit trail. This is the type of control that I am talking about. I
do not (necessarily) mean to disallow any extension to the language,
simply monitored extension.
For example, if I were to go into your MacLISP base load and change
the MAKE-LIST function all of a sudden, in order to have the
arguments appear in the order that I wanted to see them, you would
all have cows! All of your functions would stop functioning!
Extensions (changes) to the base language that are not carefully
thought out lead to real trouble in the future. Now, consider having
written a large system with 100 functions or so. Suddenly you decide
to change the internals of some function or another without regard
for who or what calls it, other than the caller that you care about!
Insto-MESS! There are a couple of ways around this: 1- always make a
copy of a function with a different name [as *THROW, *CATCH in recent
discussion] -- this leads real fast to a proliferation of functions.
The second way is to keep careful tabs on what functions call this
one and the way that they have to be changed to match changes in this
function. It is exactly this latter type of control (auditing
function) that I mean should be included in NIL (or whatever). I am
not against extending the language any more than I am against GOTOs
(which is not very much). However, the system should provide a means
by which a programmer can FOLLOW, CONTROL, AUDIT extensions that he
or the support personnel make!

Date: 11 November 1980 13:50-EST
From: Jon L White
To: shrage at WHARTON-10
cc: LISP-FORUM at MIT-MC, LISP-DISCUSSION at MIT-MC
Re: Sifting thru the "good mud": LISP vs APL

I'd like to offer some comments on the topic which Shrager brought up
last week, and except for "ditto"ing RMS's remarks about the
sensibility of labor division between EMACS and LISP, I hope these
will be points which others haven't yet brought out. It's worth
noting that EMACS was brought up on MULTICS as a subsystem written in
MacLISP, and a similar effort is under way now to bring it up in NIL
(the NILE project, which is currently running to some degree on the
NIL emulators of the PDP10).
    Date: 10 Nov 1980 (Monday) 0512-EDT
    From: SHRAGE at WHARTON (Jeffrey Shrager)
    Subject: Improving the LISP working environment by taking a
    lesson from... APL!

    1- When a programmer writes a function for his system it is
    "eaten" by LISP. That is, it becomes a part of the workspace as
    if it had been there all along. . . .

    2- The CONTINUE workspace feature of APL permits a user to logout
    and come back an hour later, the next day, etc. with his
    environment intact as he left it . . .

    3- LISP has this one real unfortunate feature; it is too flexible
    and too simple simultaneously. This is the reason that it is so
    hard to transport LISP systems from one machine to the next.
    "Basic LISP" is so trivial that no one can write anything in
    it. . . .

Re points (1) and (2): we have had in MacLISP, for about 8 years, a
division of space into "pure" and "impure"; initially it was for the
benefit of MACSYMA, and then for other "dumped-out" systems, so that
as many as possible of the pages of the dumped-out system could be
shared among simultaneous users. This is not quite the same thing as
the InterLISP MAKESYS facility, which generally gives you
copy-on-write access to most of the dumped pages, but it is something
which places a burden on the system-builder to distinguish at load-up
time between functions and data bases which are essentially debugged
and those which are more volatile; redefinitions don't cause
"private" copies of pages, but do consume address space, since the
pure stuff is never reclaimed. Such a division has the happy
consequence that a "checkpoint" dump need write out only the pages
which are "impure", and that when such a "checkpoint" dump is
restored, it merely maps pages in (rather than trying to recreate the
state of some workspace). In fact, we have several systems here
(e.g., NACOMPLR, the "NIL-aided COMPLR", and BXOWL on the ML machine)
which are several levels deep in a cascade of such dumpings.
That is, each level in the cascade causes to be dumped only the
marginal difference between itself and the previous level; at
"restore", or start-up, time the deeper level will actually be
mapping in pages from numerous files -- one or more files from each
level in the cascade. I may be wrong about this, but it seems
unlikely that even a clever SYSOUT could do much better in terms of
the size of the marginal dump file; it certainly could not be faster.
Needless to say, we have similar facilities in NIL on the VAX. The
LISPM has an "area" facility, rather than a "pure" and "impure" space
concept, and from one point of view this is adequate, since it is not
expected to have several simultaneous users of a dumped-out LISP
subsystem.

This brings me up to point (3) -- LISP is too flexible. Well, the
blame can hardly be laid at the feet of the language LISP, for its
first incarnations were far too limiting! In fact, the community
which uses LISP is a highly innovative group, and the decision to use
LISP as a systems programming language (by such a rapidly evolving
community) may be what gives it its apparent instability. When the
data structures of lisp proved to be competitively inferior, we
extended them (fast arithmetic in MacLISP, record-structured use of
data in InterLISP and now in MacLISP, character and bit strings,
etc.); when the control structures left something to be desired, we
added them (after many groping experiments like MICRO-PLANNER,
CONNIVER, etc.). When SMALLTALK showed us something new, LISP was our
vehicle of choice in which to try something new (CLASSes of LISPM,
and the EXTEND of NIL over which a similar CLASS hierarchy is built).
Deutsch hit the nail on the head when he quoted Moses
(approximately): "APL is a diamond -- you can't add anything to it,
even another diamond, without ruining its beauty. Lisp is a ball of
mud: you can keep adding more and more mud to it and its nature
doesn't change."
It's this ability to absorb the "good mud" of other languages which makes LISP so powerful and enduring; it will die if it ever fails to be "too flexible". Lastly, I'd like to remind the community that LISP and APL are a lot more alike, in a very crucial aspect, than this discussion would have led one to believe. What other languages besides machine language even admit dynamic variables (who cares if some pedagogical logicians lambast them as villains)? How effectively do other languages/systems allow you to have dynamic loading? Sure, BASIC is interactive, but who will prefer it over APL or LISP (yes, Virginia, all prominent LISPs are interactive, not just ...).  Date: 11 November 1980 1430-EST (Tuesday) From: Guy.Steele at CMU-10A To: SHRAGE at WHARTON-10 (Jeffrey Shrager) cc: lisp-forum at MIT-MC Re: Of diamonds and mud: Here's another paraphrase: "APL is like a diamond. You can add more diamonds... but unfortunately to get the several diamonds to stick together you need goop, which messes them all up. LISP is mud, and using mud to stick together mud at least preserves the homogeneity of the language." [Case in point: files can be regarded as large arrays of characters. But the file facilities of most APL systems don't treat them that way at all.]  Date: 12 November 1980 08:48 est From: HGBaker.Symbolics at MIT-Multics Re: Diamonds vs mud? I hope that I'm not being too chauvinistic, but I would rather think that I work with malleable GOLD rather than with hard diamonds. (We all like to think that our programs are as pretty as jewelry!)  Date: 12 November 1980 09:00 est From: HGBaker.Symbolics at MIT-Multics Re: Diamonds vs Gold It's going to be said anyway, so here goes: "There's Gold in them thar NIL's!!" (Sorry).  Date: 13 November 1980 08:10-EST From: Joel Moses To: LISP-FORUM at MIT-MC cc: JM at MIT-MC Re: LISP vs. APL I believe that I originated the line regarding LISP vs. APL. I still like the original version better than the Sussman/Steele one. 
Here goes: APL is like a diamond. If you try to extend it, that is like adding faces to the original ones, and the diamond will not shine as much. LISP is like a bean bag. Attempts to 'improve' upon it, with a new syntax, for example, will have the same effect as a bean bag. That is, the lispiness will shine right through.  Date: 2 FEB 1981 0338-EST From: DLA at MIT-AI (David L. Andre) To: FEATURE-LISPM at MIT-AI cc: LISP-FORUM at MIT-AI, DLA at MIT-EECS Re: Array modification feature A neat feature in arrays which would have been useful to me many times recently would be the ability to tell if any element of an array has been modified since some other time. Basically, this would involve one bit associated with an array which is turned on each time an ASET is done. Two separate primitives for the array would be able to get or set this bit, also. The advantage of this arises in the following type of situation: (progn (unmodify-array foo) (do-something-which-might-modify foo) (if (modified-array foo) (make-consistent foo))) Where presumably MAKE-CONSISTENT is a rather time consuming function. Also, something might want to PROCESS-WAIT until foo has been modified. I have a hunch that a feature like this wouldn't be too hard to implement on the LISP machine (although, granted, I don't know). Perhaps another bit could be set upon AREFing an array. Just what to do about ALOC, I don't know, although for my purposes, it doesn't matter, because I'm using numeric arrays... Any other thoughts on this? How difficult would implementing such a feature be? -- Dave  Date: 2 February 1981 08:39-EST From: John G. Aspinall To: DLA at MIT-AI cc: LISP-FORUM at MIT-MC Re: Array modification feature From: DLA at MIT-AI (David L. Andre) A neat feature in arrays which would have been useful to me many times recently would be the ability to tell if any element of an array has been modified since some other time.... 
The advantage of this arises in the following type of situation: (progn (unmodify-array foo) (do-something-which-might-modify foo) (if (modified-array foo) (make-consistent foo))) I have a situation like this involving redisplay of graphic objects if they have been changed by (do-something-which-might-modify foo). However, I find nothing wrong with a stack onto which do-something-which-might-modify pushes the array subscripts which have been modified. Unless the modification is so trivial that the stack push adds a significant amount of effort to the modification function, this seems like a way to do it without modify bits for every word.  Date: 2 Feb 1981 0913-EST From: Dave Andre To: JGA at MIT-MC cc: DLA at MIT-EECS, lisp-forum at MIT-MC Re: Array modification feature I was suggesting one bit for every array, not one for each element.  Date: 9 February 1981 20:28-EST From: Richard J. Fateman To: JONL at MIT-MC, JPG at MIT-MC cc: MACSYMA-I at MIT-MC, RWG at MIT-MC, LISP-FORUM at MIT-MC The atan function on the vax returns a value in the range -pi to pi, given the 2 arg form. On the pdp-10 it seems to be 0 to 2*pi. This causes some problems. Who is right? Note that macsyma CHANGES the value returned from lisp at mit to what it is on the vax; when macsyma runs on the vax, it overcompensates.  Date: 11 February 1981 06:38-EST From: Jeffrey P. Golden To: RJF at MIT-MC cc: JONL at MIT-MC, MACSYMA-I at MIT-MC, RWG at MIT-MC, LISP-FORUM at MIT-MC MACSYMA now uses LISP's atan function's range only between 0 and pi, so MACSYMA should work consistently in all LISPs, assuming you are using the latest sources. It only took a trivial change to the code.  Date: 31 Jan 1981 16:33:30-PST From: CSVAX.jkf at Berkeley Re: upper and lower case Could someone tell me why Maclisp and Lisp Machine lisp allow only upper case printnames for symbols? My guess is that Maclisp does it because it saves space, but why does Lisp Machine lisp do it? 
Are there Lisp Machines with upper case only keyboards? Does the Lisp Machine use upper case only because Maclisp does? What about Multics Maclisp, does that allow both cases in printnames? And finally, will Nil be a single case lisp system too? I ask these questions for two reasons, one is that I am simply curious. The other is that Dave Barton (drb) has written a symbolic algebra system which requires a two case Lisp system. Dave follows the mathematical convention of using upper case names to denote sets (like Z for the integers) and lower case to denote variables (like z). In order to run Dave's code on any of the aforementioned Lisps we will have to do something ugly, like translating upper case characters like Z to *Z. I think that the use of upper and lower case adds a third dimension to print names. I would call names which are restricted to a single character one dimensional names (such is the case in many Basic [ugh] systems). By allowing the names to be any length you add a second dimension. Finally, by allowing both cases you get the third dimension. And of course, a lisp system with both-case capability can easily masquerade as a single case system. ::: jkf :::  Date: 31 January 1981 20:30-EST From: George J. Carrette To: CSVAX.jkf at BERKELEY cc: LISP-FORUM at MIT-MC Re: upper and lower case I think it comes down to readability and what people are used to around here. First of all, the character set of Roman upper and lowercase as we know it today wasn't exactly designed for maximum readability during late night hacking. Also, people get used to being able to use the dimensionality of case to make code pretty without having it be a required part of the language. People use AND and and the same way, even though they are pretty much interchangeable from the lisp reader's point of view. The default recognition of names in our text editors (the Search commands) is also case insensitive. 
I think this is related to the general sloppiness in use of capitalization rules. The strongest argument may be the following: lisp programmers "talk" about their programs, they do not dwell on how they look on paper, they do not visualize them in terms of characters. Spoken communication is much the same, capitalization and case do not get past the footlights, although good punctuation is very important. The present situation is historical, and may change of course. Maclisp on Multics is fullcase, although it can be changed to translate case; PDP10 Maclisp, NIL and LispmLisp all have controllable case translation.  Date: 31 January 1981 20:41-EST From: Howard I. Cannon To: CSVAX.jkf at BERKELEY cc: LISP-FORUM at MIT-MC Re: upper and lower case Date: 31 Jan 1981 16:33:30-PST From: CSVAX.jkf at Berkeley Could someone tell me why Maclisp and Lisp Machine lisp allow only upper case printnames for symbols? . . . No, because they don't allow only upper case printnames! You are being confused by the fact that both MacLISP and the Lisp Machine translate lower case characters in symbols to upper case ON INPUT. You can type lower case characters into symbols by quoting them (either using slash for a single character or by using vertical bars for multiple characters). It is not hard in either Lisp to prevent this translation, if you have code which relies on mixed case symbols. We are of the opinion (I of course don't speak for everyone, but...) that distinguishing between upper and lower case makes it very hard to remember exactly how to "spell" things. I don't agree that case adds another dimension, it's just half a dimension, and that's where the trouble arises. People tend to be inconsistent with casification, because in many cases there is no clear best way (witness Maclisp, MacLisp, MacLISP, MACLISP, etc.). Also, if someone arbitrarily picked one of these spellings, and "enforced" it, then people would probably be hard-pressed to remember which one was chosen. 
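The input translation described above can be modeled with a toy reader. This is a modern sketch in Python, purely illustrative; `read_symbol` and its details are invented for the sketch, not MacLisp's actual reader.

```python
def read_symbol(token: str) -> str:
    """Toy model of MacLisp/Lisp Machine input case translation:
    unescaped letters are upcased; |...| protects a run of characters,
    and / protects the single following character."""
    out = []
    i = 0
    in_bars = False
    while i < len(token):
        c = token[i]
        if c == '|':
            in_bars = not in_bars          # toggle the protected region
        elif c == '/' and not in_bars and i + 1 < len(token):
            out.append(token[i + 1])       # escaped char kept verbatim
            i += 1
        else:
            out.append(c if in_bars else c.upper())
        i += 1
    return ''.join(out)

# CAR, car, and CaR all name the same symbol; |car| does not.
assert read_symbol('car') == read_symbol('CaR') == 'CAR'
assert read_symbol('|car|') == 'car'
assert read_symbol('f/oo') == 'FoO'
```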
This of course does not agree with the Unix philosophy. However, from what I have seen of Unix, programs tend not to use upper case at all (probably for this reason. For my information, how many Shell commands use upper or mixed case?). --Howard  Date: 31 January 1981 20:45-EST From: Daniel L. Weinreb To: CSVAX.jkf at BERKELEY, lisp-forum at MIT-MC Re: upper and lower case Here is the story regarding the Lisp Machine: it is not true that Lisp Machine Lisp allows only upper case printnames for symbols. What is true is that the Lisp reader upcases alphabetic characters when it sees them as parts of symbol names. This property of the reader is not wired into the system in any deep way; it would be simple to modify the reader to not do this upcasing at all. It might be only a change to the readtable, or it might be a minor code modification; I am not sure which. It should not be hard to make Dave's code run. The Lisp printer also knows about the reader and knows to slashify (usually with vertical bars) symbols with lower-case letters in them; this is also not hard to change. While compatibility with ITS Maclisp was one consideration in our decision to have the reader do this, it was not the primary reason. We chose to do things this way because our experience with systems that distinguish case is that they lead to so much confusion that the added gain of being able to have symbols whose names differ only in case is small compared to the confusion caused thereby. A particular case is the Multics file system, in which users constantly get confused and have things break because they were not careful about the case in which they typed something. Experienced Multics users hardly ever use mixed cases; they stick to one case (lower-case, as it happens) for almost all names. In essence, the reason the Lisp Machine reader ignores case is because PEOPLE usually ignore case. 
We decided that the users would be better served by putting up with the slight restriction in return for the lack of confusion. Other minor points: no, there are no upper-case-only keyboards for Lisp Machines (in fact, our keyboards have seven different shifting keys (shift, control, meta, hyper, super, top, and greek); they are known locally as "Space Cadet" keyboards). Multics Maclisp does distinguish case in symbol names, which is compatible with the way the rest of Multics works. Maclisp is presumably that way because it dates from before lower-case terminals were common. I don't know about NIL. The Lisp Machine also does not distinguish symbol names with different typefonts, for the same reason it ignores case.  Date: 31 Jan 1981 20:11:44-PST From: CSVAX.jkf at Berkeley Re: upper and lower case To: gjc,hic,dlw Thank you for clearing up my ignorance of Maclisp and Lisp Machine lisp. Gjc, you mentioned that PDP10 Maclisp has a mechanism for handling lower case print names. Is this mechanism the same as described by dlw, that is do you have to escape lower case characters or can you just do a sstatus somethingorother? Enough practical questions, back to the discussion. I can see that people are going to choose sides on this argument based on the operating system they are most familiar with. When I was an undergraduate, I worked on a PDP10 system whose file names (and thus command names) were all upper case. Life was simple for the programmer: he simply converted everything to upper case. We could justify it by pointing out that 95% of our terminals were upper case only and thus we weren't going to put in code to handle the lower case terminals in a special way. When I started using Unix, it seemed very strange to me to see everything in lower case but now that I am accustomed to it I can appreciate the advantages. Certainly, with the added freedom of upper and lower case command names, there is a possibility of mass confusion if there are no standards. 
HIC asked how many commands executed from the Unix shell use upper and lower case. There are very few, and they use upper case for a purpose. We have a simple mail handler called `mail' and a super mail handler called `Mail' (this is probably similar to :mail and :rmail on ITS). We have a simple spelling checker called `spell' and a more powerful one called `Spell'. I have personalized my shell to understand that the commands `Cifplot' and `Caesar' refer to the development versions of the system programs `cifplot' and `caesar'. I think that most people here would be very disappointed if all of a sudden they couldn't use both cases. From: GJC ... The strongest argument may be the following: lisp programmers "talk" about their programs, they do not dwell on how they look on paper, they do not visualize them in terms of characters. Spoken communication is much the same, capitalization and case do not get past the footlights, although good punctuation is very important. Spoken communication is potentially much more powerful than written communication. Thus when you have to write down your program you need all the help you can get. One way to get this power is to use different case characters. To go back to the example in my previous letter, just think of the difference between Z, Q, R and z, q, r in an algebra system. There is a lot of information contained in the case of the symbol. Another question: if you grind a file containing symbols like Maclisp, MacLisp and MacLISP, do they all come out MACLISP? If you always write FooBar in your code, aren't you bothered when this comes out FOOBAR in traces and prints?  Date: 1 FEB 1981 0128-EST From: RMS at MIT-AI (Richard M. Stallman) Re: Case Lots of people write code for Maclisp and the Lisp machine in lower case. I do it a lot of the time. On the other hand, sometimes I feel like putting code in upper case. I can't really decide which I like better. 
So I'm very glad that I can call the same functions no matter which case I feel like using. With single character names it is not very hard to remember which of the two possibilities is desired. But you shouldn't generalize from the example of "z" vs "Z". With longer or compound names there get to be more ways to choose from and it is harder to remember which one is "right". The place where most people run across this is in sending mail to people on Multics. Multics decided that it would be keen to distinguish case in names of users, as in everything else. The result is that people constantly got the case wrong when sending mail there and got their mail rejected. Eventually Multics implemented a special hack for ignoring case in the names of people who frequently got net mail.  Date: 1 FEB 1981 0139-EST From: RMS at MIT-AI (Richard M. Stallman) Re: More case There are a few systems that ignore case when comparing your input against existing symbols, but remember whatever case you use when you first define the symbol. EMACS is an example. It remembers function and variable names using both cases, but it ignores the case you use to refer to them. This way, you can make the output look the way you like without ever being screwed. Even in a more conventional system such as Maclisp which converts to upper case in the symbol names as remembered, the question of how to do OUTPUT is a separate one. Because either case means the same thing, you are free to output whichever one you like. There would be no reason why you couldn't use a printer which printed everything in lower case by default. Or one which used a property of the symbol to decide how to casify it on output. The property would be missing for a symbol that is to print with only its first letter in upper case. Other values of the property could be used to select different ways. We would probably mind if the grinder clobbered the case of our programs as we wrote them. 
We also minded that it clobbered the line-breaking and spacing in our programs, even though those are certainly not important to the machine. So we don't grind files any more. We just do the formatting with EMACS. Notice how I emphasized the word "output" by putting it in upper case. I was able to do this because the definition of "OUTPUT" is the same as that of "output". If they were two different words then such emphasis would be impossible.  Date: 1 February 1981 13:17-EST From: George J. Carrette To: CSVAX.jkf at BERKELEY cc: LISP-FORUM at MIT-MC Re: upper and lower case I see that people have already replied that you will have no problem getting DRB's code to read correctly in MacLisp or LispmLisp. Anyway, if DRB had a Lispm I'm sure he would like to take advantage of some of the really nice mathematical symbols available. He could even design his own; nobody has yet taken full advantage of these keyboards. Since the emphasis is probably on special symbols, we could have #\LAMBDA #\DELTA type things for printing them to filesystems which don't handle all the bits per character needed to support them as characters. If one read #\LAMBDA into a ZWEI buffer for instance, I see no problem with having that appear on the screen as an actual lambda character. For multiple character symbols, of which there would be very few, we could have #\(LAMBDA DELTA). I think this fits in with having #\ do a read. The only thing funny is that it creates something which has different print methods for different kinds of streams.  Date: 1 Feb 1981 15:51:56-PST From: CSVAX.jkf at Berkeley Re: more on mixed case People will always establish conventions which permit them to get their ideas across in the shortest amount of time. The function KMP wrote named `Car' would be pronounced `capital car' here and there would be no confusion between that and the one pronounced `car'. 
Admittedly there is no convention for pronouncing `cAr' but if we ever found ourselves writing things like this, then someone would find a way to pronounce it. I think that you are exaggerating the communication problem. Suppose I wrote this function: (defun macrop (FunctionName) (and (getd FunctionName) (or (and (bcdp (getd FunctionName)) (eq 'macro (getdisc (getd FunctionName)))) (and (dtpr (getd FunctionName)) (eq 'macro (car (getd FunctionName))))))) Now suppose word gets around about my great new function to tell if a function is a macro or not. Someone asks me how it works. I say "it takes functionname as an argument, checks to make sure functionname has a function binding and then if its binding is a binary object ..." I don't have to tell him that functionname is written FunctionName, that is totally irrelevant. And in fact, most of the symbols in most lisp programs are local variables, and many functions in lisp files are local to one file and thus the author should be able to write them for maximum readability to himself. Ok, you never use GRIND anymore (neither do we), but you do use compilers and I believe that the error messages from the compiler should use the representation the programmer used when printing errors and warnings. While the case doesn't affect the meaning in English, it does convey information. There is a difference between writing "you are wrong" and "YOU ARE WRONG". Aren't you a bit offended when someone writes a passage in capital letters, ISN'T IT JUST LIKE THEY ARE YELLING AT YOU? The use of the same letters with different casification (??) can be very helpful in distinguishing functions with subtle differences in meaning. I can see someone wanting to use KMP's Car function in his code as well as the standard car. 
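The coexistence asked for here comes down to whether the symbol table folds case. A toy model in modern Python for illustration (the dictionaries and names are invented): a case-sensitive table keeps Car and car apart, while an upcasing reader folds them into one entry.

```python
# Case-sensitive table: KMP's Car and the standard car are two symbols.
sensitive = {}
sensitive['car'] = 'standard car'
sensitive['Car'] = "KMP's Car"
assert len(sensitive) == 2

# Folding table (upcasing reader): both spellings collide on one symbol.
folding = {}
for name in ('car', 'Car'):
    folding[name.upper()] = name
assert list(folding) == ['CAR']
```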
In a system where the default is to single case things, you would have to call Car something else or have to escape the lower case characters; both options would result in an uglier program (very personal opinion). Multics has had some growing pains but that is to be expected. The use of both cases in names which must be used by many users or between computers must be controlled. I guess that we have done pretty well here because I haven't heard any complaints. This whole issue extends beyond Lisp. The other languages we run here treat cases distinctly (except Fortran, and Pascal when it is put in `Standard Pascal' mode). Most people tend to use mixed cases in their code although there are very few places in the system libraries where mixed case is used. One convention in C is to use all upper case to denote a macro. I have randomly searched through directories and come up with a segment from a typical C program and a typical Model program to demonstrate their use. Most typical is the use of case to delimit words rather than other separators, such as underscore. C program segment: (void) PSetPath("/dev"); if (display == NULL) error("no display specified (-g)"); grtty = POpen(display, "w", (char *) NULL, (char **) NULL); if (grtty == NULL) error("couldn't open display"); (void) PSetPath(path); Model program segment: MakeCopyOnly inline beginproc formal copyOnly{t, s} c (varies); integer i (varies); noresult; copyOnlyData temp; temp.data := i; c := temp \ copyOnly{t, s}; endproc;  Date: 1 February 1981 17:21-EST From: Earl A. Killian Re: More case I believe that InterLisp originally uppercased things on input, and then stopped. This had two effects: 1) you could then use mixed case in your programs (and many users now do, using case changes instead of "-" or "_" to separate words in a symbol, e.g. 
"NumberOfFrobs"), and 2) you had to start typing predefined symbol names in uppercase (though since DWIM will correct case errors without asking, this is sort of like EMACS's handling of case: remember it on definition and match any subsequent case usage on input). The InterLisp approach (except for the DWIM hack) can be done in MACLISP (and I believe the LISPM) with a simple SSTATUS. Unfortunately, this tends to be fairly ugly, in my opinion, because the predefined symbols are all uppercase, though there are those that like it that way. I also find the current MacLisp practice ugly, though at least I see it only when in the LISP system, and not in the editor. (Uppercase letters YELL at you instead of speaking in a normal voice; that's why saying "this does NOT frob" works. I don't like being yelled at by my computer all the time.) My preference would be to have predefined symbols be lowercase, and have the reader lowercase on input, instead of uppercase. Not very much code would break that way. Users that wanted case sensitivity could then do the simple SSTATUS, and type in their stuff mostly in lowercase, using uppercase where appropriate. The only other way to win seems to be to hack it on output as RMS suggests. I'm not sure how I feel about that yet. I presume that symbols FOO and |foo| would not both print as "foo", but as "foo" and "|foo|"? If that's the case then there is no ambiguity. Presumably |z| is too ugly for RJF's purposes, so this doesn't really provide him a good way to optionally have case sensitivity.  Date: 1 Feb 1981 16:37:58-PST From: CSVAX.jkf at Berkeley Re: a quick question about cases My version of the Lisp Machine Manual writes all functions in lower case and notes that all characters will be translated to upper case. Why was this done? If the user typed (|car| '(a b c)) would he get an undefined function error? Regarding EAK's recent letter, let me just clear one thing up. RJF has nothing to do with this discussion. 
The code in question was written by Dave Barton.  Date: 1 FEB 1981 2018-EST From: BEE at MIT-AI (Bruce E. Edwards) To: LISP-FORUM at MIT-AI, CSVAX.jkf at BERKELEY Re: Question about case The symbols CAR and |car| are different, so he would get an undefined function error.  Date: 1 February 1981 16:51-EST From: Bruce E. Edwards Re: upper and lower case As a humble programmer who has used both lisp and UNIX, I would prefer it if lisp did differentiate case. One reason is that I like the convention that some people use in C that global variables are Initially Capitalized, and that constants are COMPLETELY CAPITALIZED. The fact that you have two cases does not seem to cause people to have problems remembering the capitalizations, because nobody I have seen names variables CompLETEly-RANdomly-CASe-VarIABLE. As for mail names, any reasonable mailer should also recognize Bruce Edwards, and mail it to me, along with BEE, Bee, BeE and everything else. I would be interested in how people who have used PL1 feel about this.  Date: 1 February 1981 22:28-EST From: Kent M. Pitman To: RMS at MIT-MC cc: EAK at MIT-MC, LISP-FORUM at MIT-MC Re: Casifying at output time doesn't win... The following case comes up every time we have had to discuss this in GRINDEF bug mail. I don't think the I/O scenario you described will handle it... Consider: (PRINT (READ)) with input (X x |X| |x|). Then X must be EQ to |X|, and x must be EQ to |x|. |X| must have a print property that says he needs vbars (or equiv) on output. X must have a print property that says he accepts the default case. |x| must have a print property that says he needs vbars (or equiv) on output. x must have a print property that says he accepts the default case. Actually, only 3 of the last 4 statements may need to hold at any given time. The first two are not jointly satisfiable, however, since the symbols are eq. The second two likewise. Hence, I don't see how you can make output do the right thing as you describe. 
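The collapse KMP is arguing from can be made concrete with a toy interning function, sketched in modern Python for illustration (the function is invented, not any real reader): with an upcasing reader, X, x, and |X| all intern to the same symbol, so a single per-symbol print property cannot reproduce their distinct input spellings.

```python
def intern_token(token: str) -> str:
    """Toy model of an upcasing reader's symbol interning:
    vertical bars keep the pname verbatim, everything else upcases."""
    if len(token) >= 2 and token[0] == '|' and token[-1] == '|':
        return token[1:-1]   # bars: case preserved
    return token.upper()     # otherwise upcased on input

symbols = [intern_token(t) for t in ('X', 'x', '|X|', '|x|')]
assert symbols == ['X', 'X', 'X', 'x']
assert len(set(symbols)) == 2   # only two distinct symbols survive reading
```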
The problem is harder for multi-character symbols. I don't think your scheme would hold up. If indeed it would, I would like to hear further details. I continue to uphold that case translation should be done unless otherwise specifically requested a la / or |...|. The fact that Lisp happens to translate to uppercase rather than lowercase is probably unfortunate. But the fact that it translates is NOT unfortunate. It is quite definitely a useful and comfortable feature.  Date: 1 February 1981 23:57-EST From: George J. Carrette To: EAK at MIT-MC cc: LISP-FORUM at MIT-AI Re: More case The InterLisp DWIM of CAR=CAr=CaR=Car=cAR=cAr=caR=car is quite trivial to put into the maclisp error handler. Putting it into the compiler and still having open-coding of CAR is dubious of course. So much for DWIM. -gjc  Date: 2 February 1981 00:06-EST From: George J. Carrette To: CSVAX.jkf at BERKELEY cc: LISP-FORUM at MIT-MC Re: more on mixed case You mentioned compiler error messages. Well, since a very common way of calling the compiler is COMPILE-BUFFER in the text editor, and since the lisp code in the text editor is represented in a special way which includes the vertical and horizontal position of every token, it is possible for errors to be flagged by drawing red-lines around problem code, just like the nuns used to do to papers we wrote in elementary school. When people have the hardware and software to do these kinds of things, they don't worry too much about the added dimension of case in symbol print-names. -gjc  Date: 2 February 1981 03:08-EST From: Daniel L. Weinreb To: CSVAX.jkf at BERKELEY, lisp-forum at MIT-MC Re: a quick question about cases The Lisp Machine Manual has function names in lower-case because the old Maclisp Manual printed them that way, and Dave and I liked the way it looked in the manual. Personally, I, too, would prefer that the single case in our single-case system be lower case by default, but I don't care too much. 
The Maclisp Manual was presumably that way because either case works on ITS but only lower case works on Multics, and the manual applies to both dialects.  Date: 2 February 1981 03:16-EST From: Earl A. Killian To: GJC at MIT-MC cc: LISP-FORUM at MIT-AI Re: More case Date: 1 February 1981 23:57-EST From: George J. Carrette The InterLisp DWIM of CAR=CAr=CaR=Car=cAR=cAr=caR=car is quit trivial to put into the maclisp error handler. Putting it into the compiler and still having open-coding of CAR is dubious of course. So much for DWIM. This has nothing to do with what I was saying. It's also false.  Date: 02/02/81 07:00:07 From: RMS at MIT-AI To: KMP at MIT-MC, LISP-FORUM at MIT-MC Re: Casifying at output time wins for what I want it to do. Date: 1 February 1981 22:28-EST From: Kent M. Pitman Subject: Casifying at output time doesn't win... To: RMS at MIT-MC cc: EAK at MIT-MC, LISP-FORUM at MIT-MC The following case comes up every time we have had to discuss this in GRINDEF bug mail. I don't think the I/O scenario you described will handle it... Consider: (PRINT (READ))(X x |X| |x|) then, X must be EQ to |X| x must be EQ to |x| At this point my actual idea and your idea of my idea part company. You seem to think I am trying to arrange for each use of the symbol X to print out the way it was typed in. I'm not. I certainly want both x and X to turn into |X|, always, and both of them would print out the same. The question is, do they both print out as X or both print out as x? This is what I would like the user to be able to specify. The intended feature is: if a user uses the symbol Together and always casifies it that way, then it will print back out as Together. (And if once in a while he types in just together, it will still print as Together). If another user always writes ToGetHer, then for HIM, it will always print as ToGetHer. (And if once in a while he types just together, it will still print as ToGetHer). 
For both of these users, the symbol actually appearing in memory is |TOGETHER|. If I write a program to call the second user's ToGetHer function, I can write it any way I like. It will still print out ToGetHer. This is because his program will contain something to specify how to print that symbol, and my program will (most likely) not contain anything to say how to print it out. Anyway, I'm really quite satisfied with the way Maclisp works now.  Date: 2 February 1981 15:29-EST From: Kent M. Pitman To: RMS at MIT-AI cc: LISP-FORUM at MIT-MC Re: but you're losing flexibility... Date: 02/02/81 07:00:07 From: RMS at MIT-AI To: KMP, LISP-FORUM Re: Casifying at output time wins for what I want it to do. Date: 1 February 1981 22:28-EST From: Kent M. Pitman Subject: Casifying at output time doesn't win... To: RMS at MIT-MC Consider: (PRINT (READ)) with input (X x |X| |x|); then X must be EQ to |X|, and x must be EQ to |x|. At this point my actual idea and your idea of my idea part company. You seem to think I am trying to arrange for each use of the symbol X to print out the way it was typed in. I'm not. I certainly want both x and X to turn into |X|, always, and both of them would print out the same. The question is, do they both print out as X or both print out as x? This is what I would like the user to be able to specify. The point here is: suppose that in one critical piece of code, the guy had *really* and *truly* wanted to force uppercase "TOGETHER" to be the way the symbol was cased on input and output. He couldn't do it. He would write (defun f (|TOGETHER|) ...) and his output would come out "(defun f (Together) ...)" or "(defun f (ToGetHer) ...)" but his output stream handler would have no way to allow him to vary the case. One would rarely want such, but if he did, your algorithm doesn't seem to provide for it ... I take it that what you are saying, though, is that if he wants this sort of thing, he should be willing to pay this price. 
Anyway, I'm really quite satisfied with the way Maclisp works now. I'm glad of this. So am I. The above discussion is not really so much for your benefit as for the benefit of those who haven't thought as much about this.  Date: 4 February 1981 03:58-EST From: Robert W. Kerns Re: Oh God! Still More Case! [I haven't said anything about case yet at all, honest!] I often use case in ways that depend on my being able to use both upper and lower case to mean the same symbol. For example `(DEFUN ,name ,(nreverse arglist) (INITIALIZE-DATABASE (NREVERSE ,database-name)) ,@body) uses UPPER CASE as an additional annotation to highlight constants, as opposed to code which will be compiled away. Or (putprop foo bar 'MY-PROP) I feel that having both cases map by default to the same thing (without quoting) provides greater flexibility. In the first example, I was able to distinguish two ways of using NREVERSE. These weren't two different symbols NREVERSE and nreverse. Without this feature I'd have had to call them both either NREVERSE or nreverse.  Date: 4 February 1981 21:14-EST From: Bruce E. Edwards Re: (Spurious argumentation) Objects that are the same should be represented the same. In this recently presented example: `(DEFUN ,name ,(nreverse arglist) (INITIALIZE-DATABASE (NREVERSE ,database-name)) ,@body) the NREVERSE called in the backquote is the same nreverse that will be called by the program, so they should be printed the same. No one suggested that we map the control characters into the UPPER CASE characters so that I can write this code ``(DEFUN ,name ,(nreverse arglist) (INITIALIZE-DATABASE (NREVERSE ,database-name)) ,@ ,(^N^R^E^V^E^R^S^E body))  Date: 5 February 1981 08:16-EST From: Bruce E. Edwards To: GJC at MIT-MC cc: LISP-FORUM at MIT-AI, RWK at MIT-MC Re: spurious argumentation. The point of the message was somewhat "spurious argumentation" on my part. That is what the subject meant; I realize now it is open to misinterpretation. 
The point of my message was that case is important to differentiate different objects, and that it should not be used to differentiate different uses of the same object. (In my opinion of course). I was also trying to make the point that if I use case to differentiate different usages, there is not nearly enough case information, and maybe I would want to use fonts and other information for that. What if there were two levels of backquoting? Would you want another degree of freedom? That was the point of the message.  Date: 6 February 1981 04:00-EST From: Robert W. Kerns To: BEE at MIT-AI cc: GJC at MIT-MC, LISP-FORUM at MIT-AI Re: spurious argumentation. You're certainly right that upper/lower is only one bit of information. I have on occasion wanted to use several colors in my code. There ARE people who use fonts to highlight their code.  Date: 02/06/81 14:53:17 From: JL at MIT-MC Isn't it possible (my stupidity showing), that at EVAL time the object which represents the FUNCTION-NAME should get uppercased but that all objects on which the function works retain caseness? That way (setq foo '(car a)) => (car a) foo => (car a) (setq a (cons 'b 'c)) => (b . c) (eval foo) is the same as (CAR a) => b NOT B? Even if I did get something wrong in my example, the intention should be clear. The uppercasing happens at EVAL time, not at READ time. Okay, what's my punishment for my stupidity?  Date: 6 February 1981 15:59-EST From: George J. Carrette To: JL at MIT-MC cc: LISP-FORUM at MIT-MC Re: symbols and printed representation One problem with your suggestion is that people expect symbols which are EQUAL (in the sense that they have the same property lists, Value, Function-Cell, whatever else), to be EQ. Certainly this doesn't have to be true; much code simply uses the primitives PUT/GET/REM, FSET, SYMEVAL, SET, FSYMEVAL on symbols, and that code would run the same. But some code does (probably wrongly) hack Pnames of symbols, and some code uses EQ. That code would break. 
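JL's suggestion (fold only the function name at EVAL time) can be sketched with a toy evaluator in modern Python. This is an illustration added by the maintainers; the evaluator and its function table are invented for the sketch and are not any Lisp discussed here.

```python
# Sketch of JL's idea: only the FUNCTION-NAME is uppercased, and that
# happens at EVAL time; the data the function works on keeps whatever
# case it was read in.  Toy evaluator; all names are illustrative.

FUNCTIONS = {
    "CAR": lambda args: args[0][0],    # first element of the (quoted) list
    "CDR": lambda args: args[0][1:],   # rest of the (quoted) list
}

def toy_eval(form):
    """Evaluate (op datum ...); arguments are taken as literal data,
    so their case survives evaluation untouched."""
    op = form[0].upper()               # uppercasing at EVAL time, not READ time
    return FUNCTIONS[op](form[1:])
```

With this sketch, (car '(b c)) written as car, CAR, or CaR all reach the same function, yet the answer is b, not B, as in JL's example.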
However, there is nothing sacred about representing VARIABLEs and FUNCTIONS as symbols, I mean, it didn't come down on stone tablets. But people do depend on it. -gjc p.s. This is an interpretation of your question, since I am assuming caseification really wants to be at read-time.  Date: 9 February 1981 07:58-EST From: Robert H. Berman To: RWK at MIT-MC cc: GJC at MIT-MC, BEE at MIT-AI, LISP-FORUM at MIT-AI Re: spurious argumentation. Date: 6 February 1981 04:00-EST From: Robert W. Kerns You're certainly right that upper/lower is only one bit of information. I have on occasion wanted to use several colors in my code. There ARE people who use fonts to highlight their code. Guilty here. I find fonts very helpful to write code. For very large programs colors are helpful too. RHB writes in red, JLK writes in blue, CWH writes in green..... Another use of color is to identify fragments of code and link them together. ("Let's run the gold version of the compiler today...") Paisley anyone? Date: 31 October 1980 17:00-EST From: Kent M. Pitman cc: "(FILE [DSK:LSPMAI;CATCH QUERY])" at MIT-MC Re: *CATCH/*THROW -- CATCH/THROW From .INFO.;LISP NEWS ... Sunday Sept 17, 1978 FM+11H.57M.58S. LISP 1742 -HIC, JONL- Changes affecting all LISPs: [3] *THROW, *CATCH, CATCHALL, CATCH-BARRIER, and UNWIND-PROTECT. For compatibility with the new format of CATCH and THROW in the LISP Machine, and in NIL, we are introducing *THROW and *CATCH. Please use these, and sometime in the future, CATCH and THROW in PDP10 MACLISP may be changed also. It is recommended that old code be converted to *CATCH and *THROW. From the Lisp Machine manual, p34 catch Macro throw Macro catch and throw are provided only for Maclisp compatibility. ... 
----- RLB has proposed that we go through with the flushing of the old CATCH/THROW meanings and re-defining their functionality so that they ``do what they should have done initially, which is what *CATCH/*THROW do now.'' This would have the following effect: * For some period of time (as yet unspecified, probably a few months) CATCH/THROW would be undefined in the interpreter and the compiler would continue to generate useful error diagnostics (as it has done for several months already). * After such time, the names CATCH/THROW would become synonymous with *CATCH/*THROW. * Presumably, a large amount of code currently references *CATCH and *THROW so we would have to allow a longer time for this last changeover, but the eventual aim would be to eliminate the names *CATCH and *THROW from the language. We are interested in hearing comments from the user community about how much of an inconvenience this would be and what time spans would be necessary to accomplish each phase of this transformation process. To respond, send mail to CATCH-THROW@MIT-MC. Replies will end up in the file MC:LSPMAI;CATCH REPLY if you want to see what others have to say on the issue. -kmp  Date: 5 November 1980 1142-EST (Wednesday) From: Guy.Steele at CMU-10A To: catch-throw at MIT-MC cc: lisp-forum at MIT-MC Re: Using * in names I must agree with Dick Waters and others, that code should use *CATCH/*THROW from now on, but the old names CATCH and THROW should not go away or be recycled, and *CATCH/*THROW also should not go away. But to prevent such glitches in the future, I have a suggestion: when a feature is first implemented (say a function GLOTZ), it should from the outset be called *GLOTZ. Then when someone eventually points out what a loser it is, one can invent the better version and call that one just plain GLOTZ. If *GLOTZ turns out to be okay, then after two years the name GLOTZ can be phased in as a synonym for *GLOTZ.  Date: 5 November 1980 13:13-EST From: Kent M. 
Pitman To: RWK at MIT-MC, LISP-FORUM at MIT-MC, Guy.Steele at CMU-10A cc: "(FILE [DSK:LSPMAI;CATCH REPLY])" at MIT-MC Re: *****Idea! I considered solutions like having *sym intern specially. Eg, given CATCH and *CATCH already on your obarray, then *******CATCH would intern as *CATCH (removing redundant *'s -- the original *CATCH is of course entered with |*CATCH|). This would be nice when a new CATCH came along because it could become CATCH and *CATCH could become **CATCH and ******CATCH would now be interned as **CATCH so it would still win. Hence, people who want to just get ahold of a feature and use it and make their code really safe would use lots of ***'s ... eg, *******************************CATCH would intern the same as the oldest version of CATCH until that many *-versions were released. People that wanted a particular newer version would be resigning themselves to fight the name game -- ie, they'd use |**CATCH| or |CATCH| and they would be screwed when the new version comes out, but that's what they'd get for trying to code using the `current fad' in CATCH's as opposed to being happy with the fine functionality offered by ****...****CATCH in the first place. It has its disadvantages because presumably if you pick 150 *'s (a fairly conservative number), you will eventually (at release of version 151 of CATCH) have to do something. A syntax for an infinite number of *'s might be nice -- for the person to say explicitly that he wants to be (1) maximally correct and (2) maximally out of date with `style' ... The other alternative is the more concise CATCH[0] notation which names the version number of CATCH which you want. This has the attractive feature of what I think GLS suggested at one point at the Lisp conference -- DWTCMY (Do What This Code Meant Yesterday) [if not yours, GLS, sorry -- I can't think of anyone to pin it to ...] So newer versions are just added as CATCH[1], CATCH[2], etc. 
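The redundant-* interning rule just described is concrete enough to sketch. A modern Python illustration from the maintainers (the function name and obarray representation are invented for the sketch):

```python
# Sketch of KMP's redundant-* interning rule: a token with N leading
# stars resolves to the most-starred interned variant having at most
# N stars -- i.e. the oldest version of the feature covered by the
# stars you typed.  (A name typed inside |...| bars would bypass this
# folding entirely.)  All names here are illustrative.

def resolve(name, obarray):
    stars = len(name) - len(name.lstrip('*'))
    base = name.lstrip('*')
    # Try the oldest (most-starred) variant first, working down.
    for n in range(stars, -1, -1):
        candidate = '*' * n + base
        if candidate in obarray:
            return candidate
    return name  # no variant interned yet; intern the token as given
```

With CATCH and *CATCH interned, *******CATCH folds to *CATCH; once **CATCH exists, ******CATCH folds to **CATCH, as in the message above.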
Fancy hacks to get intern to share the PNAME word of all of these could probably be arranged. This also generalizes nicely to GLS's output radix print representation -- 555[1] is the outmoded way of expressing 15., 1111[2] is more recent, 17[8] is more recent, and 15[10] even more recent. Presumably some day we will adopt F[16] as the standard and ... well, who knows how far version numbers could take us.  Date: 11 Nov 1980 (Tuesday) 0406-EDT From: SHRAGE at WHARTON (Jeffrey Shrager) To: alan at MIT-MC cc: LISP-forum at MIT-AI Re: control vs constraint Control and constraint are not the same thing. A control might be an audit trail. This is the type of control that I am talking about. I do not (necessarily) mean to disallow any extension to the language, simply monitored extension. For example, if I were to go into your MacLISP base load and change the MAKE-LIST function all of a sudden in order to have the arguments appear in the order that I wanted to see them you would all have cows! All of your functions would stop functioning! Extensions (changes) to the base language that are not carefully thought out lead to real trouble in the future. Now, consider having written a large system with 100 functions or so. Suddenly you decide to change the internals of some function or another without regard for who or what calls it other than the caller that you care about! Insto-MESS! There are a couple of ways around this. One is to always make a copy of a function with a different name [as *THROW, *CATCH in recent discussion] -- this leads real fast to a proliferation of functions. The second is to keep careful tabs on what functions call this one and the way that they have to be changed to match changes in this function. It is exactly this latter type of control (auditing function) that I mean should be included in NIL (or whatever). I am not against extending the language any more than I am against GOTOs (which is not very much). 
However, the system should provide a means by which a programmer can FOLLOW, CONTROL, and AUDIT extensions that he or the support personnel make! Date: 10 April 1981 17:38-EST From: George J. Carrette To: BUG-LISPM at MIT-MC cc: LISP-FORUM at MIT-MC Re: Splicing macros & CGOL. I have brought the Pratt parser for CGOL up on the lisp machine. Therefore the question of how to introduce CGOL syntax expressions comes up. I don't even want to mention how it is done in maclisp. Intuitively, what I want is a splicing macro. Let's say I used sharp-sign-altmode to mean read in a sequence of cgol expressions. However, splicing macros don't work at top-level, so I'll tell you what I really need, a continuation macro. I'll try to be exact here so that there is no confusion. The toplevel READ nominally receives two arguments, the STREAM to READ from, and a continuation to which to return. If a continuation macro is read at top-level then what it gets passed is the stream and the continuation. (defun sharp-altcont (stream continuation) (DO ((FORM (CGOLREAD STREAM) (CGOLREAD STREAM))) ((EQUAL FORM '(EXIT)) NIL) (FUNCALL CONTINUATION FORM))) To me the continuation passing makes a lot of sense for any situation that's doing a READ-FROBULATE loop (actually MAPPING) over a stream. Both COMPILE and LOAD could use it. In the meantime I can make my splicing macro return a "PROGN-QUOTE-COMPILE" when it needs to.  Date: 11 April 1981 17:26-EST From: David A. Moon To: GJC at MIT-MC cc: BUG-LISPM at MIT-MC, LISP-FORUM at MIT-MC Re: Splicing macros & CGOL. What you actually want is to setq READTABLE to a value which causes cgol syntax to be read. You certainly do not want to mess with the control structure of whatever program happens to be reading forms. You also do not want to read ahead through all the cgol forms and buffer them back; what if the program does TYIs in between its READs, or changes IBASE or PACKAGE? 
It is (of course) important that whatever way it is done work interactively as well as for reading from files. Neither the Lisp machine nor Maclisp has an extensible enough reader to do this now. It has been planned for a long time to do this in the Lisp machine and I imagine would not be very hard. One way would be to make READ check (TYPEP READTABLE 'READTABLE) and if not send a message to it. A less general way would be to implement CGOL using the normal reader rather than its own special parser; I believe the Lisp machine reader has the flexibility to do this, using the readtable compiler, although there might be subtleties I have overlooked. Whatever cgol syntax means `return to normal syntax' would setq READTABLE back to whatever it was before. Of course, in many cases you would not switch syntax back and forth with reader macros, but would put -*- Syntax:CGOL -*- at the front of your file, which would bind READTABLE to the appropriate thing whenever anything was read from the file. All the mechanisms for this already exist except the above-mentioned hook in READ.  Date: 11 April 1981 17:39-EST From: Jon L White To: GJC at MIT-MC cc: BUG-LISPM at MIT-MC, LISP-FORUM at MIT-MC Re: Splicing macros & CGOL Some time ago, QUUX and I proposed a capability for a stream to be a READ stream, rather than a CHARACTER stream; as the latter buffers up characters from some real I/O channel, so the former would buffer up lisp s-expressions by reading from some lower-level character stream. Your CGOL problem seems entirely a matter of having the capability to create and use such streams -- parsing the CGOL syntax is not much of a problem, for that can be written in a rather small amount of LISP code (especially if one has a token-reader, for which MacLISP's ordinary reader could be used); the real problem as you outlined it was the necessity of reading multiple forms "all at once" at top level. 
Given an ordinary character stream, there should be a simple way to compose the read stream over it.  Date: 12 April 1981 16:51-EST From: George J. Carrette To: MOON at MIT-MC cc: BUG-LISPM at MIT-MC, LISP-FORUM at MIT-MC Re: Splicing macros & CGOL. After thinking about your note I see that doing something like setq'ing a readtable can be made to be exactly like what I want. In Maclisp what CGOL does is setq the variable READ, of course, to #'CGOLREAD, with the problem being that it is a global side-effect. The advantage is that as you mention you do not have to read all the forms and buffer them back. What if instead of a global variable READ (or READTABLE) we had a READ-STACK that was an instance variable of the STREAM being read from? The :READ message would funcall whatever was on the top of the stack, and read-macros could do (PUSH #'FOOREAD (STREAM-READ-STACK STREAM)). [Maclisp ought to be able to handle this too.] Perhaps then adopt sharp-sign-alt-mode for introducing ALTernate readers. #CGOL meaning push #'CGOLREAD on the stack. CGOL exits with the form =EXIT. This ought to work for both terminals and files. "-*- Syntax:CGOL -*- " is very nice, although [1] Maclisp doesn't support it. [2] It is advantageous to start out in Lisp because it is easier to write macros and system interface stuff in Lisp than in CGOL. This is a serious consideration; CGOL alone has enough syntax to make certain things painful to do. -gjc Date: 14 AUG 1980 1709-EDT From: KMP at MIT-MC (Kent M. Pitman) I have created a new mailing list called LISP-FORUM which is the approximate union of BUG-LISPM, NIL, and BUG-LISP. It exists only as LISP-FORUM@MC -- additions/deletions should be done on MC's mailing list file. Let's use this to discuss cruft like what follows in paragraph 2. 
Paragraph 2: More on what was discussed in Paragraph 1 ------------------------------------------------------ Is there anyone that is opposed to having #/char or #\char return a character object (on systems with no char objects, a fixnum; on systems with char objects, a NON-FIXNUM char object)? JONL tells me that the Lisp Machine people are the ones that have to be convinced in order for it to be put in. Can we hear feedback from anyone that objects to this? ie, people who rely intimately on #/ returning a fixnum and whose code will be expected to be transportable and whose code will be broken by such a change? I so far haven't heard anyone but JONL voice a negative opinion on this, and then only to say that the LispM people would never go along with it. As JAR points out, no changes to code would be needed except to code that wants to be compatible with new systems that offer char objects. -kmp  Date: 14 AUG 1980 1734-EDT From: JONL at MIT-MC (Jon L White) Re: #/char 1) People who have any code using #/... generally have code that does arithmetic on these things. You guys who want to make NIL incompatible with the rest of the world, by not having #/... return a fixnum, will either make it impossible for NIL to absorb code written by J-random user of LISP/MACLISP, or else you aspire to convincing *everyone* who's ever used #/... to go back and check their code for fixnum-dependencies. 2) Is this whole issue the hopeless morass of not enough graphic characters again? Martin, Szolovits, and Sussman have designed a "logical" character set scheme which allows 7- or 8-bit graphics for information interchange (in files, over nets, etc), but permits a user-tailored mapping to-and-from a larger set (say 8-bit or even 12-bit, a la APL). Similar "mapping" schemes have been in use for a long time in the IBM world, where EBCDIC is only one of the several mappings into the 8-bit alphabet. 
This sort of solution is necessary in the long run, and is clearly superior to forcing every possible new character usage into some # format. By the bye, I've already programmed up the Martin-Szolovits-Sussman scheme in the NIL reader. 3) More against the conservative-minded approach of making every new application take a # approach: The NIL vector syntax works because there is a simple extension of any paired bracket -- namely for { and }, do #{ and }; or in the vector case, we have "#(" paired with ")". Yes, this is a very limited approach, but it works for 1 or 2 new data types. The idea of making #. be the general approach appears attractive, e.g. #.(MAKE-FOO-DATUM ...) but it takes something more than a sophomoric mind to see the worthlessness of this "hook" as a general information interchange standard.  Date: 14 August 1980 17:43-EDT From: Alan Bawden Re: character objects & randomness Character objects are a loss. Characters are fixnums. You perform subtraction to upper case them. You add 7 to #/0 to generate #/7. You use > and < to determine if they are alphabetic. Would you change the MacLisp #/ to generate "character objects" (symbols)? That would be an amazing loss. If we are only talking about NIL, then I can live with it. On another topic: (eval-when (load) (eval-when (eval load compile) )) Should it evaluate at compile time? I can think of arguments both for and against.  Date: 14 AUG 1980 2005-EDT From: MOON at MIT-MC (David A. Moon) Re: #/ returning a "character object" This is acceptable and reasonable if character objects work as array subscripts and fixnum arithmetic can be done on them. Otherwise it is an unreasonable incompatibility.  Date: 14 August 1980 20:49-EDT From: Kent M. Pitman To: ALAN at MIT-MC cc: LISP-FORUM at MIT-MC No, #/ would *not* generate symbols. Only in languages with a datatype for character objects (ie, in NIL) would there be a difference. -kmp ps to all: A LISP-FORUM mailing list now lives on both MC and AI. 
The transcripts will continue to live only on MC in MC:LSPMAI;LISP FORUM  Date: 14 August 1980 23:10-EDT From: Robert W. Kerns To: KMP at MIT-MC cc: ALAN at MIT-MC, LISP-FORUM at MIT-MC, NIL-I at MIT-MC Re: Confusion over character objects #/ *NEVER* generates anything except fixnums. In NIL, MACLISP of any flavor, FRANZ, or LISPM. The NIL syntax for character objects is ~x, not #/x. The reason for character objects is that you can't tell that '111' is a representation of a character by looking at it, let alone that it is 'o'. It is not necessary to have character objects to have character-set independence. All that is required is that your references to whatever representation of characters actually use that character so that converting your source file converts the references. #/o satisfies that.  Date: 14 August 1980 23:18-EDT From: Robert W. Kerns To: ALAN at MIT-MC cc: LISP-FORUM at MIT-MC Re: character objects & randomness Date: 14 August 1980 17:43-EDT From: Alan Bawden To: LISP-FORUM Re: character objects & randomness Character objects are a loss. Characters are fixnums. You perform subtraction to upper case them. You add 7 to #/0 to generate #/7. You use > and < to determine if they are alphabetic. Not on an EBCDIC machine you sure don't. It would be nice if EBCDIC were flushed, but IBM isn't about to evaporate. Transportability aside, there are good CODING STYLE arguments for character objects. It is damned useful to know by inspection of the code that the 'fixnum' being handled is in reality A REPRESENTATION OF A CHARACTER! Even better, if your program is writing a file to be read on another machine, it is nice if PRINT knows they are characters! A character is not JUST a fixnum. A fixnum can REPRESENT a character, but it does not have the IDENTITY of a character. Would you change the MacLisp #/ to generate "character objects" (symbols)? That would be an amazing loss. If we are only talking about NIL, then I can live with it. 
As in my previous note, this is somebody's brain bubble. Nobody, absolutely nobody, has ever suggested to my knowledge making #/ generate "character objects".  Date: 14 August 1980 23:34-EDT From: Robert W. Kerns To: CWH at MIT-MC cc: MACSYMA-I at MIT-MC, NIL-I at MIT-MC, LISP-FORUM at MIT-MC, CSVAX.jkf at BERKELEY Re: Character objects OK, so somebody DID actually suggest it, and I just overlooked it in my first pass over my mail. I personally don't care too much what the syntax of a character object is as long as it's not changed every week or so. I'm surprised that you would suggest that TYI be changed incompatibly, Carl! After all, you've worked enough with transporting MACSYMA. Think how many pieces of code there are in the world to be transported that would have to be converted! And never mind that there are those die-hards who'd ALWAYS want to use fixnums; there are times where a fixnum is exactly what you want, such as array indexing. And I suspect that 95% of the old code written using these primitives would work correctly, as long as things like CHAR-EQUAL were used. What a self-refuting statement! I'll bet that 99.99% of the old code written using those primitives DOESN'T use CHAR-EQUAL !!!  Date: 15 August 1980 14:58-EDT From: Jonathan A. Rees To: RWK at MIT-MC cc: ALAN at MIT-MC, LISP-FORUM at MIT-MC Re: character objects & randomness Date: 14 August 1980 23:18-EDT From: Robert W. Kerns Would you change the MacLisp #/ to generate "character objects" (symbols)? That would be an amazing loss. If we are only talking about NIL, then I can live with it. As in my previous note, this is somebody's brain bubble. Nobody, absolutely nobody, has ever suggested to my knowledge making #/ generate "character objects". Kerns, you are completely wrong that no one has suggested a different meaning for #/. 
Steele proposed over a year ago that #/ return "characters," which on systems that provided them would mean "character objects," but would otherwise probably mean "fixnums." (Please read MC: NIL; CHPROP >.) I have been pushing this for months. (I agree with Alan that Maclisp should NOT provide character objects as such. But adherence to a character standard is an independent issue from that of the existence of character objects.)  Date: 15 August 1980 15:25-EDT From: Jonathan A. Rees To: ALAN at MIT-MC cc: LISP-FORUM at MIT-MC Re: character objects & randomness Date: 14 August 1980 17:43-EDT From: Alan Bawden Character objects are a loss. Characters are fixnums. You perform subtraction to upper case them. You add 7 to #/0 to generate #/7. You use > and < to determine if they are alphabetic. Not to argue the case for them once again, but a complete character standard includes all the primitives you want for manipulating them - at the lowest level, things like CHAR-CODE (to coerce to a fixnum), CHAR< (for comparison), maybe even CHAR+ (for adding fixnums to them); at a higher level, CHAR-UPCASE and CHAR-DOWNCASE for case conversions, ALPHA-CHARP or whatever for testing alphabeticalness, UPPERCASEP for testing case, functions for converting digit characters to integers, and so on. Would you change the MacLisp #/ to generate "character objects" (symbols)? That would be an amazing loss. If we are only talking about NIL, then I can live with it. I don't think that anyone (except maybe Fateman) refers to symbols when using the term "character object." The idea is that they're implemented as fixnums (hopefully immediate pointers) with a funny type code. If you've been thinking they're symbols, I can see why you'd be grossed out.  Date: 15 August 1980 16:13-EDT From: Alan Bawden To: RWK at MIT-MC cc: LISP-FORUM at MIT-MC Re: character objects & representation issues Date: 14 August 1980 23:18-EDT From: Robert W. 
Kerns Transportability aside, there are good CODING STYLE arguments for character objects. It is damned useful to know by inspection of the code that the 'fixnum' being handled is in reality A REPRESENTATION OF A CHARACTER! Indeed, one of the reasons for "#/" is to flag those fixnums that are being used as representations of characters. This is a "good CODING STYLE argument" for "#/", not for character objects. Even better, if your program is writing a file to be read on another machine, it is nice if PRINT knows they are characters! A character is not JUST a fixnum. A fixnum can REPRESENT a character, but it does not have the IDENTITY of a character. I frequently use lists to REPRESENT things. The lists usually don't have the IDENTITY of the things they represent (I don't know exactly what this means anyway), and they print as lists too, but this isn't an inconvenience. And there is this bonus that I can use these functions named CAR and CDR and RPLACA to examine and modify them! Amazing, isn't it: without adding a single new data-type or function to the language I can talk about new things! Of course it is true that PRINTing something from MacLisp and then READing it into some other machine (even a LispMachine) can produce problems with your character set. But this is an amazingly rare occurrence, and involves other problems as well... (are #\lambda, #\backspace and #/H all going to be different (non-EQ) character objects? On the LispMachine they are all different, and the last one isn't really a full-fledged character.)  Date: 15 August 1980 16:42-EDT From: David A. Moon To: JAR at MIT-MC cc: LISP-FORUM at MIT-MC Re: character objects Next you're going to tell me that there is CHARARRAYCALL if I want to use a character as an index into an array, CHARLOGXOR if I want to complement a bit such as alphabetic case or meta in a character, CHARASSQ if I want to look one up in an a-list, maybe even CHARFUNCALL if I want to pass one as an argument? 
Isn't all this an entirely excessive amount of mechanism for a partial solution to a very small part of the problems encountered when transporting between systems with different environmental conventions?  Date: 15 August 1980 1739-EDT (Friday) From: Guy.Steele at CMU-10A Re: Bawden's remarks on characters While character *objects* as an explicit data type may be a loss (I don't think so, but only marginally), the *idea* of characters is certainly a distinct idea from that of fixnums. If you disagree, then I claim that the difference between characters and fixnums is not very different from that between fixnums and flonums -- they are both bit patterns, right? -- indeed, Gosper and Kahan get a lot of mileage out of doing ADD and TLNE instead of FADD on flonums -- so maybe we should just go back to assembly language. Sure, you can add 7 to #/0 to get #/7. You can also subtract a constant to convert to upper case. But which constant? For ASCII, it is 32 decimal. For EBCDIC, it is -64 decimal (in EBCDIC, lower-case letters have smaller numeric codes than upper case ones). Does this constant work for Greek fonts? Yes, adding 23. to #/A to get #/X is a neat crock. So is Kahan's SQRT algorithm. The latter is somewhat contained, however, while the former, I suggest, is far more pervasive. Writing (CODE-CHAR (+ 23. (CHAR-CODE #/A))) is not that much more work, is clearer as to what is going on, and no less efficient when compiled (on the LISP Machine CODE-CHAR and CHAR-CODE are probably identity macros -- but by putting in the calls to identify the explicit coercions we give the code half a chance of being compatible with other implementations).  Date: 15 August 1980 1743-EDT (Friday) From: Guy.Steele at CMU-10A Re: #/ returning a "character object" Regarding Moon's note: I believe the character proposal CHPROP calls for there being a kind of array that can be subscripted by characters. 
This is not necessarily distinct from other kinds of arrays, and there need not be a way to distinguish a character-indexed array from a fixnum-indexed array -- just a way to ask for an array indexed by characters. Alternatively, one can always "convert" the characters to fixnums by using the CHAR-CODE function (which would probably be the identity on the LISP Machine).  Date: 15 August 1980 1754-EDT (Friday) From: Guy.Steele at CMU-10A Re: character objects & randomness Making #/ generate character objects did not originate in CHPROP. I am quite certain that back in the early design of NIL (1978) we decided that #/ would read as a character object (and, against my weak protests, that ~ would also -- my own position is that there is no point in gobbling ~); to read a fixnum equivalent to a character there would be the syntax #=/x, and to read a character equivalent to a fixnum (how awful!) there would be #=nnn. CHPROP's contribution was to note that one can make a *conceptual* distinction between characters and fixnums whether or not an *implementational* distinction is drawn. Therefore CHPROP suggests that #/x mean "that object which in the current implementation is standardly used to represent the character x", whether that be a fixnum, a special character data type, a symbol, or whatever. Additionally, #=/x is guaranteed to be a fixnum, and #=nnn is like #/x where #=/x equals nnn. By the way, the obvious extension is that #=\RUBOUT works just like #=/x, though I haven't edited CHPROP yet for that.  Date: 15 August 1980 1810-EDT (Friday) From: Guy.Steele at CMU-10A To: MOON at MIT-MC (David A. Moon) cc: lisp-forum at MIT-MC Re: character objects CHARARRAYCALL is silly. One can instead specify that arrays permit characters as well as fixnums as indices, and implicitly "coerce" them (in many implementations this will be the identity coercion). 
On the other hand, CHARLOGXOR is clearly a loss, and I would suggest that writing (CODE-CHAR (LOGXOR nnn (CHAR-CODE c))) at least makes it clear that you're hacking around with the representation. On the third hand, if you're doing LOGXOR it is probably for the explicit purpose of changing case or hacking a control bit, and the proposal CHPROP provides for operations to do these common things. I think I would rather write (UNMETA (CHAR-UPCASE x)) than (LOGXOR 1040 x). Certainly all this stuff only solves part of the transportability problem, but it is a start. In particular, this problem is a great impediment to writing a transportable editor, which I would like to see happen. As I think JAR pointed out, having a character standard and having character objects are distinct notions. The former is very important, the latter merely nice for some purposes.

Date: 15 AUG 1980 1745-EDT
From: RMS at MIT-AI (Richard M. Stallman)
Re: generic types vs. specific types

The use of a small set of generically usable types is one of Lisp's best features. In other languages which have user-defined list structure data types, because each application uses a nominally new and different type (even though it is for something that Lisp lists would work fine for), all the wheels have to get re-invented for it. The same goes for things that are really numbers. It would be a bad idea to have a different data type for characters if that means that all the fine old numeric functions don't work on them. The user should not have to have a whole new set of CHAR-+, CHAR-<, CHAR->, CHAR--, CHAR-LOGAND, CHAR-BOOLE, etc, etc. But if objects of the new character data type are accepted by everything that would work on a number, then they might be a reasonable idea. Aside from a few things like PRINT, everything else should treat a character object just like the fixnum we use now. Then the new data type will not be a burden on anyone.
However, don't expect it to solve problems of transfer between character sets except of the most minor sort.

Date: 15 August 1980 18:45-EDT
From: Mike McMahon
Sender: MMcM at CADR6 at MIT-AI
To: Guy.Steele at CMU-10A, lisp-forum at MIT-MC
Re: Bawden's remarks on characters

Date: 15 August 1980 1739-EDT (Friday)
From: Guy.Steele at CMU-10A

Yes, adding 23. to #/A to get #/X is a neat crock. Writing (CODE-CHAR (+ 23. (CHAR-CODE #/A))) is not that much more work, is clearer as to what is going on, and no less efficient when compiled (on the LISP Machine CODE-CHAR and CHAR-CODE are probably identity macros -- but by putting in the calls to identify the explicit coercions we give the code half a chance of being compatible with other implementations).

(CODE-CHAR (+ 23. (CHAR-CODE #/A))) is #/Q in EBCDIC. #/A is 301 and #/X is 347. In other words, the new functions haven't gained you anything in transportability of code that chooses to do character arithmetic. What we have here is a solution in search of a problem.

Date: 16 August 1980 18:12-EDT
From: Daniel L. Weinreb
cc: LISP-FORUM at MIT-MC
Re: character objects & representation issues

I have heard a lot of discussion here, and I think some of it arises from people's not understanding what is being proposed. For example, of course converting a character into a fixnum, adding a constant, and converting it back is not a character-set-independent way to uppercase something. Nobody ever said it was; the proposal was to have a CHAR-UPCASE function. On the other hand, you can certainly have such a function even if there is no distinction made between fixnums and characters.
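[Editorial sketch: MMcM's arithmetic above can be checked directly. Written in modern Common Lisp spelling as a hindsight convenience; the ASCII codes are decimal, the EBCDIC codes for A and X are the octal values quoted in his message, and the code for EBCDIC Q, #o330, is not quoted above and is an assumption here.]

```lisp
;; Checking MMcM's counterexample with plain arithmetic.
(defconstant +ascii-a+  65)     ; #/A in ASCII
(defconstant +ascii-x+  88)     ; #/X in ASCII (65 + 23)
(defconstant +ebcdic-a+ #o301)  ; #/A in EBCDIC, as quoted above
(defconstant +ebcdic-x+ #o347)  ; #/X in EBCDIC, as quoted above
(defconstant +ebcdic-q+ #o330)  ; #/Q in EBCDIC -- assumed value

;; The "add 23." crock works only in ASCII:
;;   ASCII:  A + 23 = X
;;   EBCDIC: A + 23 = Q, not X
;; so wrapping it in CODE-CHAR/CHAR-CODE documents the hack
;; without making it portable -- which is exactly his point.
```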
When this discussion first started, I thought RWK was advocating the virtues of having character objects be a distinct data type so that it might be possible to check at runtime whether a given Lisp object was a character object as opposed to a fixnum, and so that functions like CHAR-UPCASE might be able to detect fixnum arguments as errors, thus helping out a user who is mistakenly trying to uppercasify a number or something. However, in recent messages, GLS seems to be saying that his idea of character objects does not require that it be possible to distinguish characters from fixnums at runtime. I am not sure what this is all about. Perhaps the idea is that one can write code that will work in two different Lisps, one in which character objects are really a different data type, and one in which they are not; but that any functions that distinguished between the data types at runtime could NOT be used in such code, as they would work only in the first dialect but not in the second. Then users of the first dialect would gain the advantages and disadvantages of having separate character types, and users of the second would not, but it would still be possible to write portable code. Existing Lisp Machine code would have to be modified only if someone wanted to make it portable, and the modifications would be to add some coercion functions around arithmetic uses of characters, or to change those arithmetic uses to functions like UNMETA and CHAR-UPCASE and such. (The latter is not a bad idea anyway.) Maybe I misunderstood one of them, or maybe they are not expounding the same proposal. The proposal that I am construing from GLS's remarks sounds pretty reasonable to me. Lisp Machine people should note that my construal requires no change in any code UNLESS you are interested in being portable. More than one person has said "this won't really solve portability anyway". While it is certainly not a panacea, it does solve some problems.
(1) When you debug your portable code in NIL, you get NIL's extra error checking -- but the code works on the LispM too. (2) You can uppercasify characters and it will still work in EBCDIC. (As I said, this could also be done without character objects, by providing CHAR-UPCASE for fixnums; but the important thing is to CLEARLY DEFINE a portable set of allowable operations, so that it is possible to write code and be reasonably sure it will stand a good chance of running in other character sets.)

Date: 16 August 1980 18:23-EDT
From: Daniel L. Weinreb
Re: generic types vs. specific types

It would be a bad idea to have a different data type for characters if that means that all the fine old numeric functions don't work on them. The user should not have to have a whole new set of CHAR-+, CHAR-<, CHAR->, CHAR--, CHAR-LOGAND, CHAR-BOOLE, etc, etc.

Suppose that I, for some reason, WANT to write a portable Lisp editor that will work on an EBCDIC system. In general, suppose that I want to write code that will work in any character set. In that case, I have no business doing CHAR-BOOLE; it is obviously not a meaningful or defined thing to do to a character. Of the functions you mention, indeed CHAR-< and CHAR-> would be needed (although if > works on fixnums and flonums and bignums equally well there is no reason for it to go to the trouble of failing on character objects), because the concept of a collating sequence is usually useful and well-understood. Let me repeat that whatever happens, I would like to see defined Lisp functions to do all those things we usually do by doing arithmetic on characters, such as uppercasing, converting from numbers to digits (as in (+ #/0 x)) and so on. Having these functions, and defining them consistently with NIL, plus the inclusion of some NIL-compatible identity macros, MIGHT be all we need to resolve this whole question.

Date: 16 August 1980 18:42-EDT
From: Robert W. Kerns
To: ALAN at MIT-MC
cc: LISP-FORUM at MIT-MC
Re: character objects & representation issues

Date: 15 August 1980 16:13-EDT
From: Alan Bawden
To: RWK
cc: LISP-FORUM
Re: character objects & representation issues

Date: 14 August 1980 23:18-EDT
From: Robert W. Kerns

Transportability aside, there are good CODING STYLE arguments for character objects. It is damned useful to know by inspection of the code that the 'fixnum' being handled is in reality A REPRESENTATION OF A CHARACTER!

Indeed, one of the reasons for "#/" is to flag those fixnums that are being used as representations of characters. This is a "good CODING STYLE argument" for "#/", not for character objects.

OK, so I was being a little fuzzy. You KNOW this argues for CHAR-= and CHAR-<, rather than for the desirability of characters being represented as a character object. It was just a lead-in to the actual argument for characters, which you've kindly reproduced below.

Even better, if your program is writing a file to be read on another machine, it is nice if PRINT knows they are characters! A character is not JUST a fixnum. A fixnum can REPRESENT a character, but it does not have the IDENTITY of a character.

I frequently use lists to REPRESENT things. The lists usually don't have the IDENTITY of the things they represent (I don't know exactly what this means anyway), and they print as lists too, but this isn't an inconvenience. And there is this bonus that I can use these functions named CAR and CDR and RPLACA to examine and modify them! Amazing isn't it, without adding a single new data-type or function to the language I can talk about new things! Of course it is true that PRINTing something from MacLisp and then READing it into some other machine (even a LispMachine) can produce problems with your character set. But this is an amazingly rare occurrence, and involves other problems as well... (are #\lambda, #\backspace and #/H all going to be different (non-EQ) character objects?
On the LispMachine they are all different, and the last one isn't really a full-fledged character.)

The bit about identity was brought up by others who flamed that a character and a fixnum were identical. I assert that the CONCEPTS *ARE* >>DIFFERENT<< ... (otherwise, we'd just have one word, right?). If you don't understand what I mean by "identity", it's because it's too obvious. You may think that transporting code is an "amazingly rare occurrence". I somehow doubt I would be amazed. You don't have Fateman hassling you for magtapes. You may thrill to the idea of using indistinguishable lists to represent n different things, but I'd much prefer to be able to tell my objects apart. I like to talk about pieces of my objects by name, it's much more friendly. I use one function to modify them: SETF. I use lists to build lists of things. I think it's a nice, simple, consistent world view, no? But anyway, for DLW's sake, I think it is clear that being character-set independent and readable in your code, via #/x and things like CHAR-=, is far more important than having character objects. And even if you don't like your code to be readable (sorry), and your code isn't transportable anyway, it can't hurt you for those functions to exist for those who DO care about such things. I think DLW got the wrong impression about why I want character objects: I don't care much about runtime type-checking. I DO care about being able to tell what's going on while debugging. I find having objects print out in a way that relates to their meaning very helpful. I also brought up the point that they provide an additional type of transportability: files written by PRINT. These are definitely two independent points, and I may have confused some by bringing them up together. I would like to extend DLW's point about character functions not requiring you to re-write your code to character OBJECTS. If you have code that uses TYI, =, <, etc., you'll just deal with characters as fixnums.
Now if I write (DEFSTRUCT CHARACTER (CODE)) (or whatever the syntax is) and define functions INCH to return them, and OUCH to output them, and make things which take fixnums for character purposes (like string-manipulators, TYO, etc.) also work on characters. In short, if I make things which take fixnums to represent characters ALSO take my 'CHARACTER's, what breaks? Nothing. Now there are those who would replace fixnums as characters completely, making TYI return a character object. (Actually, if we wanted to, we could make most reasonable uses of character objects replace fixnums, such as vector/array references, arithmetic (as long as you add fixnums to a single character, not two characters, which is nonsense anyway).) But I take a more middle ground (yes, I know I'm supposed to be a radical but I'm getting old) and only suggest AUGMENTING the language, not REDOING it. I would like to repeat a point repeated by GLS (I won't recurse further but GLS originated it ultimately): It is more important that there be a character STANDARD than that there be character OBJECTS. They are independent issues which happen to overlap in the problems they attack. I argue for both of them.

Date: 16 August 1980 18:52-EDT
From: Daniel L. Weinreb
To: Guy.Steele at CMU-10A
cc: LISP-FORUM at MIT-AI
Re: Bawden's remarks on characters

Yes, adding 23. to #/A to get #/X is a neat crock. So is Kahan's SQRT algorithm. The latter is somewhat contained, however, while the former is, I suggest, far more pervasive. Writing (CODE-CHAR (+ 23. (CHAR-CODE #/A))) is not that much more work, is clearer as to what is going on, and no less efficient when compiled...

All this is true, but did you really mean to suggest that writing that is the right thing? It sure doesn't help achieve portability, as MMcM points out. Maybe you weren't thinking very hard when you wrote this?

Date: 16 August 1980 19:20-EDT
From: Daniel L. Weinreb
To: RWK at MIT-MC
cc: LISP-FORUM at MIT-MC
Re: character objects & representation issues

Ah. I see. You mention two motivations for character objects; one works fine in "my" proposal (the one I construed out of what I think GLS is saying; I take no credit), but the other has problems. First, you want character objects for debugging, so that when you see a capital A you can tell that it is a capital A rather than a 101 octal. With "my" proposal, you do indeed get this, so long as you do your debugging in NIL rather than on the Lisp Machine. I don't think anyone has objections to this. Secondly, you want character objects to print differently. Presumably a capital A prints as #/A, rather than as 101. This would not work in "my" proposal. If you ran the portable FOOBAR on the LispM, and it prints a data structure, and then you run FOOBAR in NIL and it reads it back in, then character objects can get converted to fixnums. I don't see any solution to this problem, other than to meekly suggest that maybe it is just not a very common thing. Code that wants to print characters for human eyes can be made portable by going through a FORMAT control option whenever printing character objects, rather than ever using PRINT directly.

Date: 16 August 1980 20:18-EDT
From: Kent M. Pitman
To: LISP-FORUM
Re: Proposal #3 for MacLISP characters...

I submit the following as a possible plan for Maclisp characters, as it describes a useful functionality with very little overhead. For Maclisp, we could even CONS a set of fixnums (values #o0 through #o177 which were distinct from the ones normal READ would give). (Mis)Features:

* #/A ... etc could return those. INCH could also return from that table.
* The printer could know about them -- or at least GRINDEF.
* CHARACTERP or whatever could obviously be trivially defined and would be useful for debugging. In compiled code, it would probably want to just be a FIXP test tho'.
* `Character' fixnums would not be EQ to `normal' fixnums (this was never previously guaranteed anyway) but the mathy operations would still work on them.
* CHAR-UPCASE or whatever you call it could return a `character' fixnum, but if you did normal math (getting back a normal fixnum) you'd have an object which still worked ok for maclispy operations; OUCH could be made to complain about non-`character' fixnums -- good for debugging code to be later moved to less tolerant systems. (TYI would of course take either kind of fixnum.)
* FIXP would still return T, so CASEQ would not complain about using characters. ... Some other code which wanted characters to not be FIXP would possibly lose -- I suspect such code would be in the minority and hope that most such code would still usefully accept fixnums. eg,

(DEFUN FOO (X)
  (COND ((CHARACTERP X) ...)
        ((FIXP X) (FOO (CHARACTER X)))
        ...))

Date: 16 August 1980 21:08-EDT
From: Richard J. Fateman
Re: characters

One of the nice things about lisp is that objects can have properties. For example, some representations of characters have widths associated with them. Others are digits, special characters, or alphabetic. While collating sequences are useful, they are not the only thing... Some people would even like to have property lists for numbers, you know. I know there are other ways of associating information with objects, but if READC returns a symbol (atom; pointer if you will) it all fits together. It may sound revolutionary to this group, but maybe, in the face of an actual application involving characters, one would be wise to avoid LISP and use a language with other characteristics. Some implementations of LISP can be used fairly comfortably with other languages. Certainly if efficiency in reading/writing is of critical importance, the "unspoken alternative" of not using LISP is worth considering. In the absence of an actual application, discussions can go on indefinitely.
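[Editorial sketch: the shape of KMP's Proposal #3 above. A table of consed `character' fixnums cannot be reproduced in an implementation with immediate fixnums, so a one-slot box stands in for the consed fixnum; CHAROB, INTERN-CHAR, and CHAR-CODE-OF are hypothetical names, and the sketch is in modern Common Lisp spelling.]

```lisp
;; One preallocated, interned "character" object per code
;; #o0..#o177, so #/A-style reads can return the table entry
;; and the CHARACTERP test is cheap.
(defstruct (charob (:constructor %make-charob (code)))
  code)

(defparameter *charob-table*
  (let ((v (make-array #o200)))
    (dotimes (i #o200 v)
      (setf (aref v i) (%make-charob i)))))

(defun intern-char (code)
  "Return the unique `character' object for CODE."
  (aref *charob-table* code))

;; TYO/OUCH-style functions can accept either representation,
;; per the proposal's tolerance for plain fixnums:
(defun char-code-of (x)
  (if (charob-p x) (charob-code x) x))
```

Because the table is built once, two reads of the same character yield EQ objects, which is what makes the printer and CHARACTERP tricks workable.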
Date: 16 August 1980 21:23-EDT
From: Robert W. Kerns
To: JAR at MIT-MC
cc: LISP-FORUM at MIT-MC
Re: "completely wrong"

Date: 15 August 1980 14:58-EDT
From: Jonathan A. Rees
To: RWK
cc: ALAN, LISP-FORUM
Re: character objects & randomness

Date: 14 August 1980 23:18-EDT
From: Robert W. Kerns

Would you change the MacLisp #/ to generate "character objects" (symbols)? That would be an amazing loss. If we are only talking about NIL, then I can live with it. As in my previous note, this is somebody's brain bubble. Nobody, absolutely nobody, has ever suggested to my knowledge making #/ generate "character objects".

Kerns, you are completely wrong that no one has suggested a different meaning for #/. Steele proposed over a year ago that #/ return "characters," which on systems that provided them would mean "character objects," but would otherwise probably mean "fixnums." (Please read MC: NIL; CHPROP >.) I have been pushing this for months. (I agree with Alan that Maclisp should NOT provide character objects as such. But adherence to a character standard is an independent issue from that of the existence of character objects.)

Hey, the context was the discussion, not "has ever suggested it anywhere", and I have already acknowledged that CWH had suggested it in the discussion and I was wrong. Inaccurate. Full of shit. Whatever. I am aware that the idea was brought up a year-and-a-half ago or whenever, and laid to rest. I would like to lay it back to rest. (The specific idea that #/ and TYI return CHAROBS.) [Also, I have read CHPROP several times. Your wording seems to imply that you have been trying to get me to read it for months. I don't think that is what you intended to imply, but the ordering of your words implies it.]

Jonathan's note points out a problem with CHPROP -- it uses the term "character object" to denote whatever representation of a character you're using: a fixnum, symbol (ugh), or whatever.
This conflicts with the meaning that I think everyone in this discussion uses (at least most of the time): a special datatype just for characters. If I'm wrong, it just proves the point; confusion results. But as JAR has pointed out on many occasions, it doesn't actually require that a given machine implement a special datatype; you can use fixnums if you want to. If you read it, it carefully defines "character object" for the purposes of the paper. It is a "standard", and it takes pains to be LISPM compatible, MacLisp compatible, etc. I add my voice to JAR's in urging people to read and adopt it.

Date: 16 August 1980 19:20-EDT
From: Daniel L. Weinreb
Subject: character objects & representation issues
To: RWK at MIT-MC
cc: LISP-FORUM at MIT-MC

Ah. I see. You mention two motivations for character objects; one works fine in "my" proposal (the one I construed out of what I think GLS is saying; I take no credit), but the other has problems. First, you want character objects for debugging, so that when you see a capital A you can tell that it is a capital A rather than a 101 octal. With "my" proposal, you do indeed get this, so long as you do your debugging in NIL rather than on the Lisp Machine. I don't think anyone has objections to this. Secondly, you want character objects to print differently. Presumably a capital A prints as #/A, rather than as 101. This would not work in "my" proposal. If you ran the portable FOOBAR on the LispM, and it prints a data structure, and then you run FOOBAR in NIL and it reads it back in, then character objects can get converted to fixnums. I don't see any solution to this problem, other than to meekly suggest that maybe it is just not a very common thing. Code that wants to print characters for human eyes can be made portable by going through a FORMAT control option whenever printing character objects, rather than ever using PRINT directly.
If the information as to what's a character and what's a fixnum is lost, no amount of TECO-MADNESS-FORMAT-TWIDDLY-CONTROL-STRING can fix it back up. Anyway, I have stated (several times, but it got lost in the verbiage) that I do not think #/A should return anything other than 101. ~A is the representation for a character object (or #~A if you like). Too much code already expects #/A to be a fixnum. But of course, if you run portable FOOBAR on the LispM you can't expect to get all the features of running in NIL. But only two things would be hurt by it: human eyes, and transporting to someplace with a different character set. As you say, not too important. So anyway, I'll run and shout "CHAROBS WIN, CHAROBS WIN!", but if the LISPM people don't want them, it's pretty clear I won't sit in a hole and cry. But I think that we DO have a crying need for a standard like MC:NIL;CHPROP >.

Date: 16 AUG 1980 2218-EDT
From: RMS at MIT-AI (Richard M. Stallman)
Re: "And even if you don't like your code to be readable (sorry)"

If you were able to figure out that you should put a "sorry" in the phrase, why not go one step farther and not say it at all. There is no unanimity about what difference (if any) these stylistic questions make in program readability. In any case, what to do about characters is such a minor part of Lisp that no matter what is done it will not affect the readability of any large program more than a small fraction. So the statement I quoted is both arrogant and exaggerated. This is not the way to move a discussion toward any useful end.

Date: 08/17/80 06:12:35
From: RMS at MIT-AI
Re: CHPROP problems

Here are some problems with CHPROP.

0) There is no function for turning a character into a fixnum, or vice versa, assuming those data types are different. These operations are important. CHAR-CODE returns only part of the character.

1) There is no need for functions CHAR<, CHAR=, CHAR>.
The functions <, = and > should be used for this, and they should be willing to compare a character object with a number if those data types are different. Having separate functions for comparison of characters does not accomplish anything, because on systems which implement characters with fixnums, they would be the same as the fixnum comparison functions anyway, and on systems with character objects there is no difficulty with making the functions "<", etc., generic.

2) #/x is supposed to return a character object if they exist. It sounds like people agree now that this should not happen, so why not fix CHPROP?

3) #=nnn is superfluous when there is #/nnn.

4) The convention of a package called "*" seems like a bad idea. I don't think the Lisp machine will adopt it.

5) Right now on the Lisp machine there is one sort of character which can have bits and another which can have a font. There is no sort which can have both. It might be an improvement if there were, but it might also be a lot of work. This might prevent the Lisp machine from adopting CHPROP.

Date: 17 August 1980 08:24-EDT
From: Carl W. Hoffman

RMS@MIT-AI 08/17/80 06:12:35 Re: CHPROP problems

0) There is no function for turning a character into a fixnum, or vice versa, assuming those data types are different. These operations are important. CHAR-CODE returns only part of the character.

What about FIX and CHARACTER?

Date: 17 August 1980 23:10-EDT
From: Robert W. Kerns
To: RMS at MIT-AI
cc: LISP-FORUM at MIT-MC
Re: CHPROP problems

Date: 08/17/80 06:12:35
From: RMS at MIT-AI
Re: CHPROP problems

Here are some problems with CHPROP.

1) There is no need for functions CHAR<, CHAR=, CHAR>. The functions <, = and > should be used for this, and they should be willing to compare a character object with a number if those data types are different.
Having separate functions for comparison of characters does not accomplish anything, because on systems which implement characters with fixnums, they would be the same as the fixnum comparison functions anyway, and on systems with character objects there is no difficulty with making the functions "<", etc., generic.

Because when I look at code and see "<", I think "Aha, fixnums are being compared", and when I see "CHAR-<" I think "Aha, characters are being compared." Besides, I'm not too sure what "CHAR-<" should do in an EBCDIC environment. Maybe it should do the right thing and simulate ASCII?

2) #/x is supposed to return a character object if they exist. It sounds like people agree now that this should not happen, so why not fix CHPROP?

GLS?

3) #=nnn is superfluous when there is #/nnn.

What is #/nnn besides the fixnum-represented character "" followed by nnn? Is this a typo for #/nnn? I do think #= is superfluous.

4) The convention of a package called "*" seems like a bad idea. I don't think the Lisp machine will adopt it.

I've argued this for years. I think the name "*" is bad, but not the idea of a separate package for machine dependencies. But I'd rather see SOME standard adopted. If you don't like "*:", please make a counter-proposal.

5) Right now on the Lisp machine there is one sort of character which can have bits and another which can have a font. There is no sort which can have both. It might be an improvement if there were, but it might also be a lot of work. This might prevent the Lisp machine from adopting CHPROP.

Can you determine a subset of CHPROP which you could adopt? Modifications to CHPROP which would aid the LISPM in adopting it? Perhaps the notion of bits and fonts could be collapsed into one somehow?

Date: 08/17/80 23:23:54
From: TK at MIT-AI
Re: Character Objects (what else!)

Since Lisp-forum has clogged my mailbox nonstop for the past week, I guess there is room for my paltry contribution.
The root cause of all this confusion is that both sides are correct. One wants both the efficiency of representing characters by numbers, so that primitive (and presumably fast) operations such as + work on them, while your debugger (and you) want them printed as characters. This could work in two ways. One is to introduce yet another "hardware" datatype which is interpreted correctly at run time by all the primitives such as +, <, PRINT, etc. Another equally good way is to retain sufficient information from the source code so that, despite the fact that there is a fixnum staring at you, you realize that it represents a character. The second of these requires a more sophisticated debugger, along with good hooks into the compiler, but these same hooks are already necessary to support well-established programming features such as SETF (or macros in general). My conclusion is that it is overdue that we address this more general problem of dealing with the debugging/compiling support for the syntax/macro features of the language and stop trying to patch our way through the debugging maze by adding more hardware datatypes. Either approach will work for this problem, but as I mentioned above, we already are losing on debugger support for independent language features.

Date: 17 August 1980 23:36-EDT
From: Robert W. Kerns
To: RMS at MIT-AI
cc: LISP-FORUM at MIT-AI
Re: "And even if you don't like your code to be readable (sorry)"

Date: 16 AUG 1980 2218-EDT
From: RMS at MIT-AI (Richard M. Stallman)

If you were able to figure out that you should put a "sorry" in the phrase, why not go one step farther and not say it at all.

Because it communicated my view of it. The "(sorry)" was to acknowledge that I was just interjecting my view and that you had the right to view it differently. Also, you have excluded the context, which would have shown that it was a play on my wording in the previous sentence. I really can't see this as being arrogant.
As shown below, you obviously did not take it that way.

There is no unanimity about what difference (if any) these stylistic questions make in program readability. In any case, what to do about characters is such a minor part of Lisp that no matter what is done it will not affect the readability of any large program more than a small fraction. So the statement I quoted is both arrogant and exaggerated.

The humor intended stemmed from what I had hoped was an obvious exaggeration.

This is not the way to move a discussion toward any useful end.

I apologize for not stating things in a way that you would take as I intended. I don't think my intent requires any apology.

Date: 17 August 1980 23:42-EDT
From: Robert W. Kerns
To: RJF at MIT-MC
cc: LISP-FORUM at MIT-MC
Re: characters

Properties aren't the only way to associate information with an object. The usual way to do it with characters is by means of an array indexed by the fixnum representation of the character. This has many advantages over property lists, including that it is easier to do something for all the characters with a simple DOTIMES.

Date: 18 August 1980 00:33-EDT
From: Robert W. Kerns
To: TK at MIT-AI
cc: LISP-FORUM at MIT-MC
Re: Character Objects (what else!)

I find TK's approach very interesting. I don't think it is the right one, but there is much in what he says. I do think there are many occasions where there should be more communication between compiler and debugger. But consider that not all data can be immediately traced to the code that produced it. Documentation and "self-knowledge" of objects, I think, is truly independent of code. It's not clear to me that there is any need for characters to be hardware datatypes, however. 'CONSing' them can mean simply fetching them from an array, and getting a fixnum from them for various purposes (comparisons, etc.) can be just an AREF.
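[Editorial sketch: the array-indexed scheme RWK describes above, one array per attribute subscripted by the character's fixnum code, in modern Common Lisp spelling. The attribute names, the 128-character set, and the starred function names are all illustrative, not from any proposal in this thread.]

```lisp
;; One array per attribute, indexed by character code.
(defparameter *char-width*
  (make-array 128 :initial-element 1))

(defparameter *char-alphabetic-p*
  (make-array 128 :initial-element nil))

;; "something for all the characters with a simple DOTIMES":
(dotimes (i 128)
  (setf (aref *char-alphabetic-p* i)
        (or (<= (char-code #\A) i (char-code #\Z))
            (<= (char-code #\a) i (char-code #\z)))))

;; Attribute lookup is just an AREF after the fixnum coercion.
(defun char-width (c)
  (aref *char-width* (char-code c)))

(defun char-alphabetic-p* (c)
  (aref *char-alphabetic-p* (char-code c)))
```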
The generic operators (+, <, =) would have to know about them if you take RMS's suggestion, of course, but that's the problem with generic operators.  Date: 08/18/80 04:05:21 From: DLW at MIT-AI Re: CHPROP problems 0) There is no function for turning a character into a fixnum... As I understand it, a character is a "record" containing three components; it is supposed to be invisible to portable programs how these fields get combined into a character. So the portable set of functions ought not include such a function as you want; if you did this operation, your program would become non-portable. I may be confused though. 1) There is no need for functions CHAR<, CHAR=, CHAR>. The functions <, = and > should be used for this, I'd like this too; are the NIL people willing to make this work in the VAX and S-1 implementations, or would it slow things down badly? 4) The convention of a package called "*" seems like a bad idea. I don't think the Lisp machine will adopt it. I don't like this either, but I would be willing to put a hack into the package system if it is absolutely necessary for NIL compatibility. But I think the choice of "*:" was not very good, especially since we already had the package system at the time it was chosen. 5) Right now on the Lisp machine there is one sort of character which can have bits and another which can have a font. There is no sort which can have both. It might be an improvement if there were, but it might also be a lot of work. This might prevent the Lisp machine from adopting CHPROP. This is quite explicitly mentioned in CHPROP; did you read all of it? Any code that works by using the %%KBD codes rather than magic constants will not have any trouble; we just redefine those codes. I volunteer to try to find the places that don't use that...  Date: 18 August 1980 04:17-EDT From: Daniel L. 
Weinreb To: RWK at MIT-MC cc: LISP-FORUM at MIT-MC Re: CHPROP problems
Well, do you also object to PLUS, on the grounds that you want to know whether fixnums, flonums, or bignums are being compared? Do you want to have a floating-point version of >? Some people like generic functions. In this case, I think it is a matter of taste. If necessary, to make everyone happy, we could define that < must work on characters, but that CHAR< exists as an alternate name; I'd prefer not to do this, though, as it is rather inelegant.

Date: 08/18/80 04:21:05 From: DLW at MIT-AI Re: In reply to TK: Character Objects (what else!)
Suppose someone has a program that creates a data structure whose format is a list of alternating characters and fixnums. He then runs the program, which returns such a list, and he SETQ's FOO to the list. Now he types FOO at top level. It would take an awfully clever printer/compiler collaboration to figure out how to print this, if characters are fixnums. I don't think characters can ever be printed as such without another hardware data type. (I also think that in this case we should forgo the debuggability in order to not put in the new data type, at least on the LispM.)

Date: 18 August 1980 04:44-EDT From: Robert W. Kerns To: DLW at MIT-MC cc: LISP-FORUM at MIT-MC
I didn't say I objected to generic functions; I said there are problems with them. What's wrong with doing PLUS on characters is that you can't add two characters meaningfully. This disagrees with the semantics of PLUS in general, so I don't think it should be grouped in. There already *IS* a floating-point version of '>', and always has been. Look in the Moonual (if you can find one, sigh...). I agree, generic functions are good things to have. Maybe LESSP should work on characters. Do you think it should allow you to compare characters and fixnums? Characters and flonums?
Part of the disagreement stems from another issue -- the LISPM's use of the Maclisp fixnum-only names for the generic operations. Fixnum-only operations are necessary in non-LISPM environments. What do you think '<' should do with the font information? Ignore it? If so, NIL can't do it. Even if the font info is included in the comparison, it would take KMP's MacLisp hack for FRANZ and other non-immediate-fixnum Lisps to do it. I'm not sure if it's possible for everyone in general to do it. I still think the recommended way should be via CHAR<.

Date: 18 August 1980 09:22-EDT From: Carl W. Hoffman To: RWK at MIT-MC cc: LISP-FORUM at MIT-MC
    Date: 17 August 1980 23:10-EDT From: Robert W. Kerns
    Because when I look at code and see "<", I think "Aha, fixnums are being compared", and when I see "CHAR-<" I think "Aha, characters are being compared."
While I agree with the objections you raised in your most recent message, I disagree with this one. (> NEXT-CHAR #/A) makes it fairly clear that characters are being compared, just as (> ITEM-COUNT ITEM-LIMIT) makes it clear that integral quantities are being dealt with. Granted this is an informal convention, but if type information is present in the operand names, it shouldn't be necessary in the operator names as well.

Date: 18 AUG 1980 1916-EDT From: RMS at MIT-AI (Richard M. Stallman)
It seems strange to hear an objection to the use of < for comparing characters based on asking whether it should ignore the font. When I suggest that < be used instead of CHAR<, I mean exactly that. CHPROP says clearly that CHAR< doesn't ignore the font, or any other part of the character. CHAR-LESSP is used for that. I can't believe the suggestion that CHAR< ought to use the ASCII collating sequence on an EBCDIC machine is serious either.
Clearly anyone writing a Lisp for such a machine will make CHAR< use the machine's own collating sequence: which is to say, he will make it be the same as comparing the fixnum equivalents of the characters, because that is what the machine's own collating sequence is. If you tried to tell him to make it do ASCII comparisons, he'd probably say "but on our system we don't use the ASCII collating sequence and I want comparing characters in Lisp to be compatible with comparing them in all the other languages on our system". What this implies is that a transportable program shouldn't make assumptions about the order of the collating sequence, regardless of whether the function is called CHAR< or <. For that matter, it must not make any assumptions about the collating sequence in use with CHAR-LESSP. Please don't make the claim that this says anything about the suitability of "<" for transportable programs unless you are willing to accept the conclusion for CHAR< and CHAR-LESSP as well. In general, a program using < is no more and no less transportable than it would be using the name CHAR< instead. By the same token, making use of the fact that a character has an equivalent as a fixnum doesn't imply nontransportability. It might be used to do something nontransportable, but that is not all it is good for. For example, it might be used as an array index, or be stored in a numeric array. It might be used to generate a hash code to index an array, if it has too many bits to use as an index as it stands. These things can be done without depending on the particular value of any character(s). Only arithmetic causes a problem, and then the same problem can occur operating on just the code part of the character. If it is bad to allow the entire character to be turned into a fixnum, it is just as bad to allow CHAR-CODE. It is true that someone might turn a character into a fixnum and then use LDB to extract the font. 
That would be a mistake, if there is a convenient function for doing just that. Similarly, it is a mistake to use PROG and GO to do something that is a trivial COND; but that is no reason not to have PROG and GO. Given that a character either IS a fixnum, or is a fixnum with a funny data type field, there should be no obstacle in the way of making use of this fact.

Date: 18 AUG 1980 2215-EDT From: RMS at MIT-AI (Richard M. Stallman)
There is no more need for a special package for symbols like the one giving the maximum character value than there is for the symbol for the function to get the first element of a list. Both of these symbols have system-dependent definitions. Both of them have a system-independent meaning. Perhaps there should be a name prefix for symbols giving values useful in connection with characters. Perhaps "CHAR-" is good enough for that.

Date: 18 August 1980 23:08-EDT From: Daniel L. Weinreb To: RWK at MIT-MC cc: LISP-FORUM at MIT-MC
    What's wrong with doing PLUS on characters is that you can't add two characters
Of course. I never said PLUS should work on characters; I was just using it as an example of generic functions!
    There already *IS* a floating point version of '>', and always has been.
I beg your pardon; page 59 of the old Moonual says that ">" works for pairs of fixnums or for pairs of flonums. What version do you have in mind; have things changed since that manual?
    it should allow you to compare characters and fixnums? Characters and flonums?
Of course not; I just want it to compare characters with each other.
    What do you think '<' should do with the font information?
As RMS said, it should do what CHAR< is defined to do: be undefined.

Date: Tuesday, 19 August 1980 11:44-EDT From: Chris Ryland Re: amazing...
Gee, it looks like we're rediscovering typed languages here (and I'm not referring to compiled languages; there are plenty of typed, interpreted languages resembling Lisp: ECL, PPL, POP-2, etc).
You know, people have agonized over these decisions hundreds of times in as many languages: how to deal with characters, how objects should print, etc., and they've come up with some reasonable methods for dealing with them in clean ways. Lisp is truly a wonderful language; too bad it's taken so long to start facing these issues.

Date: 19 August 1980 12:37-EDT From: Robert W. Kerns To: DLW at MIT-AI cc: LISP-FORUM at MIT-MC Re: Red face
I am guilty of flaming without looking. Indeed, MacLisp does NOT have a $< function; it uses <. This won't work in NIL, since fixnums are immediate quantities and flonums are TWO-WORD non-immediate quantities. It is also dependent on how the machine represents flonums... NIL is the one that has $<. Sorry about that. This shows up the problem with creating an "efficient" function for handling a special case easily open-coded, and then applying it to more than one type. I think NIL will need to have a $< function. I really don't think it is a good idea to depend on being able to represent fixnums and characters in an 'almost the same' manner. However, I don't have any objection to generic functions, and GREATERP and LESSP should work on characters. Of course, LISPM people use < and > as their generic functions with GREATERP and LESSP as alternate names for compatibility, so < and > should work on the LISPM (and will, of course, if the LISPM continues representing characters as fixnums). I do think CHAR< should exist. It provides documentation in the code that something is a character, the possibility of type/range checking at run time if desired, and guaranteed compatibility, all of which cannot be provided otherwise. Its cost is a few lines of code and documentation. I really can't see it as a threat to LISP's racial purity.

Date: 19 August 1980 13:33-EDT From: Robert W. Kerns To: RMS at MIT-AI cc: LISP-FORUM at MIT-AI
Date: 18 AUG 1980 1916-EDT From: RMS at MIT-AI (Richard M.
Stallman) To: LISP-FORUM at MIT-AI
    It seems strange to hear an objection to the use of < for comparing characters based on asking whether it should ignore the font. ...
I'm sorry, that was not an objection. It was a question. It would indeed be a strange objection.
    I can't believe the suggestion that CHAR< ought to use the ASCII collating sequence on an EBCDIC machine is serious either. Clearly anyone writing a Lisp for such a machine will make CHAR< use the machine's own collating sequence: which is to say, he will make it be the same as comparing the fixnum equivalents of the characters, because that is what the machine's own collating sequence is. If you tried to tell him to make it do ASCII comparisons, he'd probably say "but on our system we don't use the ASCII collating sequence and I want comparing characters in Lisp to be compatible with comparing them in all the other languages on our system".
Fine, don't believe it was serious if you want. Here your unstated assumption is that compatibility with the local environment is more important than compatibility with what the code did before you transported it. I can imagine a user complaining "but when I run it on the IBM machine, I get these funny things in the MIDDLE of my output rather than at the end like I did before!". I guess I'm just being an anti-IBM bigot and not considering the IBM local environment worth being compatible with. But this is supposed to be a character STANDARD! And ASCII is the standard character sequence.... What do IBM people do when they have to interface with the rest of the world, anyway? Does anybody on this list know whether they ever run programs that use ASCII with I/O conversion, or just what they do?
    What this implies is that a transportable program shouldn't make assumptions about the order of the collating sequence, regardless of whether the function is called CHAR< or <. ...
True, if the collating sequence isn't standardized.
I would like very much for it to be, but can accept that it may not be politically feasible. It's interesting that we're arguing political positions for users who don't exist yet. I'm not sure that these users will be IBMers rather than primarily people transporting programs developed in an ASCII environment.
    In general, a program using < is no more and no less transportable than it would be using the name CHAR< instead.
This is obviously untrue if your character objects cannot be compared with the function <. If you have a microcoded machine where < is generic and you can do whatever you want, then fine; but if you will recall, I've already covered this point: I have not been convinced that in all implementations a character object can be compared in the same manner as a fixnum. If you care to convince me that it CAN, and that it SHOULD, that still leaves the argument of coding style.
    By the same token, making use of the fact that a character has an equivalent as a fixnum doesn't imply nontransportability. It might be used to do something nontransportable, but that is not all it is good for. For example, it might be used as an array index, or be stored in a numeric array. It might be used to generate a hash code to index an array, if it has too many bits to use as an index as it stands. These things can be done without depending on the particular value of any character(s). Only arithmetic causes a problem, and then the same problem can occur operating on just the code part of the character. If it is bad to allow the entire character to be turned into a fixnum, it is just as bad to allow CHAR-CODE. It is true that someone might turn a character into a fixnum and then use LDB to extract the font. That would be a mistake, if there is a convenient function for doing just that. Similarly, it is a mistake to use PROG and GO to do something that is a trivial COND; but that is no reason not to have PROG and GO.
    Given that a character either IS a fixnum, or is a fixnum with a funny data type field, there should be no obstacle in the way of making use of this fact.
I don't know of any arguments that you can't use CHAR-CODE and use the fixnum representation where appropriate. Certainly I haven't made such an argument. I know I missed CWH's #/ and TYI argument before; am I missing something, or are you confused about my position?

Date: 19 August 1980 13:51-EDT From: Robert W. Kerns To: CWH at MIT-MC cc: LISP-FORUM at MIT-MC
    Date: 18 August 1980 09:22-EDT From: Carl W. Hoffman To: RWK cc: LISP-FORUM
        Date: 17 August 1980 23:10-EDT From: Robert W. Kerns
        Because when I look at code and see "<", I think "Aha, fixnums are being compared", and when I see "CHAR-<" I think "Aha, characters are being compared."
    While I agree with the objections you raised in your most recent message, I disagree with this one. (> NEXT-CHAR #/A) makes it fairly clear that characters are being compared, just as (> ITEM-COUNT ITEM-LIMIT) makes it clear that integral quantities are being dealt with. Granted this is an informal convention, but if type information is present in the operand names, it shouldn't be necessary in the operator names as well.
What about (> (GET-NEXT-COMMAND COMMAND-STREAM) COMMAND-STREAM)? What about even bigger constructions with computed arguments? You can't encode ALL information in the called operator name! Having the type information in the operator name guarantees it will be easy to find. If all code were made of one-liners, life would indeed be simple and bug-free, but in real code the more information included, the greater the likelihood of having the information where you want it.

Date: 20 AUG 1980 0437-EDT From: RMS at MIT-AI (Richard M. Stallman)
    I don't know of any arguments that you can't use CHAR-CODE and use the fixnum representation where appropriate. Certainly I haven't made such an argument.
I was talking about turning the entire character, not just the code, into a fixnum. Someone claimed that this was not transportable, so I argued that it was no less transportable than turning just the code into a fixnum.

Date: 9 October 1980 16:37-EDT From: Kent M. Pitman Re: Thoughts on Completing Readers
I have some ideas about completing readers that I would like to get some input about. I would like to have a completing reader which interacted gracefully with a LispM-style rubout handler. The problem is that the tokenizer needs to do the completion, but the prescan needs to see the chars coming in. Moreover, if a rubout happens later, the completion char (eg, altmode) should not be rescanned but the characters auto-completed should be. Hence, I think it might be useful to build some general support into the readers which would allow smooth communication. Let me describe a scenario (using some code I have been working on for the Macsyma reader...)

(DEFUN TOKENIZE-NORMAL-SYMBOL (STREAM)
  (DO ((C (TYIPEEK () STREAM #\EOF) (TYIPEEK () STREAM #\EOF))
       (FLAG () T)
       (L () (CONS C L)))
      ((AND FLAG (NOT (OR (DIGITP C) (ALPHABETP C) (= C #/\))))
       (NREVERSE L))
    (IF (= (TYI STREAM) #/\)
        (SETQ C (TYI STREAM))
        (SETQ C (CHAR-UPCASE C)))))

Suppose that this reads a normal Macsyma symbol returning its characters backward.
Now suppose we augment it as follows:

(DEFUN TOKENIZE-NORMAL-SYMBOL (STREAM)
  (DO ((C (TYIPEEK () STREAM #\EOF) (TYIPEEK () STREAM #\EOF))
       (FLAG () T)
       (L () (CONS C L)))
      ((AND FLAG (NOT (OR (DIGITP C) (ALPHABETP C) (= C #/\))))
       (NREVERSE L))
    (CASEQ (TYI STREAM)
      ((#/\)
       (SETQ C (TYI STREAM))
       (SETQ C (CHAR-UPCASE C)))
      ((#\ATTEMPT-COMPLETION)
       (MAPC #'(LAMBDA (C) (FUNCALL STREAM ':UNTYI-COMPLETION C))
             (COMPLETE-AS-FAR-AS-POSSIBLE (REVERSE L)
                                          THINGS-TO-COMPLETE-OVER)))
      ((#\QUERY-COMPLETIONS)
       (FUNCALL STREAM ':DISPLAY-POSSIBLE-COMPLETIONS
                (POSSIBLE-COMPLETIONS (REVERSE L)
                                      THINGS-TO-COMPLETE-OVER))))))

The problems are as follows:
* The tty prescan needs to recognize that the characters #\ATTEMPT-COMPLETION and #\QUERY-COMPLETIONS are not to be `remembered' (so that if a rubout later causes rescanning, the completion isn't retried, since the completed characters will already be on the stream).
* Not all aspects of tokenizing should need to support completion. Eg, while tokenizing a string, I might want to disable completion. So I should be able to dynamically control whether completion is allowed. I feel that the easiest way to do this is to have the RUBOUT-HANDLER bind some special variable like COMPLETION-ALLOWED to (). When I do a TYI that allows completion, I will bind it to T. If it's (), the RUBOUT-HANDLER treats #\ATTEMPT-COMPLETION and #\QUERY-COMPLETIONS like any other char, echoing them and passing them on. If it's T, then it passes the char to the scanner without remembering it (presumably the stream will next receive UNTYI-COMPLETION requests back from the scanner -- if it receives no such requests, then it may wish to beep at the user if it's a tty rather than a file (a style issue we'll leave open); if it does get such requests, it will display them (again, iff it's an interactive stream like a tty)). Upon rescan, those untyi'd chars should be indistinguishable in their buffer from other, really-typed-in chars, and the completion character should not be present.
* There's some question about what the space is that it should complete over and how this should be communicated. Eg, for Macsyma, it will probably initially complete over the Macsyma-defined symbols and not over user variables. Presumably it should be user-settable. It can be context-sensitive, perhaps, in lists, by having the list constructor actually bind a variable that made things after ('s complete over functions and things in the middle of lists complete over variables. Things like COND, etc. would confuse it. Maybe they could be handled, too. It's hard to say until we've tried it for a while what would be comfortable. I actually think that in the long run a distinguished character should be made available to the user which is called . Whether or some other character should work to show completions, I'm still not sure about -- but it would seem a reasonable application for that char. I can put this functionality into the rubout handler I have built for my Maclisp applications, but I'd like to get some input here on whether others have suggestions on technique, philosophy, etc. I have built a poor model of this which didn't do at all the right thing but did give me the feel of typing in Macsyma expressions with completion, and it was indeed a pleasure given some of the long names hanging around. I'm hoping this will catch on. Comments/Questions? Would anyone be interested in making this or some equivalently useful functionality work on the LispM? -kmp

Date: 1 September 1980 06:37-EDT From: Alan Bawden Re: /* ... */
Some languages have this neat feature where you can comment out pieces of text with some form of balanced frobs. Like in PL/1 you use "/*" and "*/" to open and close such a comment. I have often felt the need for such a feature in Lisp. Especially when I have to write large paragraphs of documentation and find that not all the text-mode commands work properly when every line starts with a ";;;".
The other method of commenting code (the COMMENT fexpr) requires you to make your English parsable by READ, and to place your comments in places that will be EVALed. (Also your comments end up in the UNFASL file in MacLisp.) I would like to propose that we adopt some kind of balanced commenting feature in Lisp. My initial suggestion would be that we use "#{" to open a comment and "}#" to close a comment. (It is a feature that it takes TWO characters to end a comment, since it allows you to use braces with gay abandon within the comment.) The biggest problem here is that "#{" has a meaning in NIL. A feature such as this is useless unless we can get almost everyone to agree to at least tolerate it. Other suggested characters:
#[ comment ]#  ; does "#[" have a NIL meaning?
#| comment |#  ; not quite as nice looking, I think.
#; comment ;#  ; bad idea. Things that think they understand ";" (EMACS, ZWEI, tty prescan) will get confused.
In any case, we should be sure to have the things nest properly. PL/1 doesn't do that correctly, and people are constantly bitching about it. Reactions?

Date: 2 Sep 1980 08:44:50-PDT From: CSVAX.jkf at Berkeley To: alan at mit-mc cc: lisp-forum at mit-mc Re: comment brackets
As I see it, we have gone full circle in our choice of a commenting method for high-level languages. Initially we had the C in column one of Fortran. After you put that C in column one, you could put anything you wanted in the rest of the card. The next stage, when we went to more freely formatted languages, was the balanced comment. And finally, in ADA we revert to a LISP-style convention, with `--' taking the place of `;'. I think that it is great that some language designers have finally realized that LISP was right all along. I program mainly in lisp and C, which uses the PL/1-style /* ... */, and I much prefer the LISP-style comment. It allows me to comment every line if I want to; you can't do this with /* ... */, there is just too much baggage around the comment.
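[The nesting Alan asks for means the reader keeps a depth count rather than stopping at the first close bracket. An editorial illustration of what properly nesting block comments would permit -- hypothetical, since no reader implemented this at the time:]

```lisp
#| depth 1: commenting out a region of code ...
(DEFUN F (X)
  #| depth 2: a block comment that was ALREADY in the code |#
  X)
... still at depth 1 here, so F stays commented out ...
|#
(DEFUN G (X) X)   ; the reader resumes here, back at depth 0
```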
Also, if you want to comment out a line or two of LISP, you can just put a `;' at the beginning of the lines to remove. But in C, you cannot surround the lines to comment out with /* ... */ if any of those lines contain comments! And finally, the preferred way of formatting a multi-line comment in C is
/*
 * this is the first line
 * and yet another line
 */
The reason for the added *'s is not that the language requires them, but to keep reminding you that this is a comment and not program text. Of course in LISP, using `;', this is always done. I can see one use for #| ... |#, and that is to surround large blocks of text. I can imagine including parts of documentation files in the source code, where I don't want to see ugly `;'s in front of each line. Another case would be if you wrote a large section of documentation inline and now want to right- and left-justify it. When you send that block off to the text justifier, you don't want any extraneous semicolons around. To summarize, I can see a very limited use for #| ... |#, and if such a convention is adopted, I would ask that people only use it when absolutely necessary. - john foderaro

Date: 3 September 1980 1044-EDT (Wednesday) From: Guy.Steele at CMU-10A To: Eric J. Swenson cc: lisp-forum at MIT-MC Re: Suggestion for commenting brackets like /*...*/
In reply to your suggestion that simply /*...*/ be used as the commenting brackets: "It could be done... but it would be wrong." It would be fairly kludgy to do that and yet have "/" retain its normal meaning of quoting the next character. Also, every program in the world that parses LISP (like editors) would have to be changed. The advantage of #|...|# is that programs that parse MacLISP in a simple-minded way would continue to work for that syntax also, more or less.

Date: 1 October 1980 11:00-EDT From: Kent M. Pitman Sender: ___137 at MIT-MC To: ALAN at MIT-MC cc: LISP-FORUM at MIT-MC Re: #| ...
|# I think it is tremendously ugly and reduces the referential transparency of code, since code might be commented out with that and it might not show up on the screen you are viewing. I like the fact that ;'s make themselves plain to users as comments and would be sad to lose the friendly ;;;'s at the head of comment lines. I think you should do something like #.(TYIPEEK #\FORM) ... in the (hopefully) few cases where you really need such a feature. I can probably be convinced to give in on this issue, but I wish to be on record as not approving of it. I think we should save #| ... |# for later, more useful, applications. I talked with Jonathan Rees over the weekend on this issue and he concurred with me that ;'s were more readable in his opinion. -kmp

Date: 1 October 1980 1144-EDT (Wednesday) From: Guy.Steele at CMU-10A To: KMP at MIT-MC (Kent M. Pitman) cc: lisp-forum at MIT-MC Re: #| ... |#
I agree that having ;;; at the front of each line is nicer for several reasons than using #| ... |#. Unfortunately, there are things you can do easily with #|...|# that you can't with ;;; because most editors won't support them. For example, if I have extensive paragraphed comments, I can't rejustify the paragraph in most editors without first removing the ;;;'s. Similarly, I can't grind commented-out (or intended-as-comment) LISP code. #|...|# trivially permits such operations. (Note that MIDAS has a similar facility, and it tends to be used sparingly; most MIDAS programmers prefer ;;; to COMMENT whenever feasible. But COMMENT has its uses.)

Date: 1 October 1980 12:13-EDT From: Kent M. Pitman To: Guy.Steele at CMU-10A cc: LISP-FORUM at MIT-MC
Actually, this just argues for getting better text editors -- ones which can hack the visual distinction involved in structure hanging next to a line of semicolons. But for now I'll give in and hope people use #| ... |# with discretion, as I still think it is apt to be confusing in some of the places it is most useful.
-kmp

Date: 1 October 1980 12:23-EDT From: Kent M. Pitman To: Guy.Steele at CMU-10A cc: KMP at MIT-MC, JAR at MIT-MC, LISP-FORUM at MIT-MC Re: #| One last comment on #| |#
Btw, here's some food for thought ...
(DEFUN NO-SHARP-VBAR? ()
  (ERROR "Read End-Of-File while scanning for |#"))
Try commenting that one out. Maybe you want comments to have an input syntax which allows slashification ... requiring the user to / certain configurations in his comments. Sigh. Also, if you think I shoulda done "Read ... /|#" anyway, maybe you'll find
(DEFUN NO-SHARP-VBAR? () ; Signals lack of |#
  (ERROR "Read End-Of-File while scanning for /|#"))
more amusing to try to #| ... |# out. Then of course there's also #| ... #| ... |# ... |# when you didn't realize someone had already #|...|#'d out some of the stuff in the middle. -kmp

Date: 1 October 1980 12:44-EDT From: Richard Mark Soley To: KMP at MIT-MC cc: JAR at MIT-MC, LISP-FORUM at MIT-MC, Guy.Steele at CMU-10A Re: #| One last comment on #| |#
But #| ... #| ... |# ... |# should work just fine! There's no reason we have to follow PL/1's messed-up /* /* */ */ kind of commenting; #|'s should stack up nicely. -richard

Date: 1 October 1980 1244-EDT (Wednesday) From: Guy.Steele at CMU-10A To: KMP at MIT-MC (Kent M. Pitman) cc: lisp-forum at MIT-MC Re: #|...|#
Of course, I'm arguing for better text editors! But I also think that for now #|...|# is an acceptable patch for the masses who won't have such editors for the next three years.

Date: 1 October 1980 1301-EDT (Wednesday) From: Guy.Steele at CMU-10A To: KMP at MIT-MC (Kent M. Pitman) cc: lisp-forum at MIT-MC Re: #| One last comment on #| |#
Nothing's perfect. You can't get a carriage return into a ;-comment either.
And what of the loser who adds a ; to the second line of
(COND ((SCREWYP FROB)
       (FORMAT T "This frob is extremely ~
screwy: ~S" FROB)))
to produce
(COND ((SCREWYP FROB)
       ; (FORMAT T "This frob is extremely ~
screwy: ~S" FROB)
       ))
because he thought the ~ in the string "didn't count"? The answer is that every string-gobbling or commenting device has to be terminated by *something*, and you just can't get that something into such a construct. So indeed, when you put #|...|# around something, you have to look for |# first. It doesn't work to put a string in a string either.

Date: 1 October 1980 14:15-EDT From: Daniel L. Weinreb Sender: dlw at CADR8 at MIT-AI To: KMP at MIT-MC cc: LISP-FORUM at MIT-MC Re: #| One last comment on #| |#
Wait a minute. I thought the whole idea was that #| |# constructs would nest properly. That is their most important feature; you can use them to comment out code without worrying about the screw that there might already be commented-out things inside. It seems that if they are to nest right, they will have to be implemented in such a way that none of the things KMP mentions will fail. Alan, would you please explain? Modulo the above, I am in favor of putting this in. Kent, I would not use them to replace the ";;;" comments before code; I would only use them to comment out code, and possibly for moby big block comments.

Date: 1 October 1980 15:25-EDT From: Kent M. Pitman To: DLW at MIT-AI, SOLEY at MIT-AI cc: LISP-FORUM at MIT-AI, RMS at MIT-AI Re: Nesting
I don't think that if you nest #|...|#'s you will get the support from Teco which you were originally seeking. #|...#|...|#...|# followed by a C-M-Rubout will rub back only to here ^ or maybe a bit before if you have sticky (' syntax) characters preceding the 3rd vbar. C-M-F and C-M-B will fail similarly.
If you want to accomplish nesting, you need a pair of matchfix characters (not character configurations) to make Teco happy --- or the Teco maintainers' support for making a radical change to Teco along these lines. -kmp

Date: 1 October 1980 16:26-EDT From: Alan Bawden To: LISP-FORUM at MIT-MC cc: BUG-LISP at MIT-MC, BUG-LISPM at MIT-MC Re: #| ... |#
1) Yes, I prefer ";" as the way to write a comment too. But there are things that you cannot do with it that you can do with "#|".

Date: 1 Oct 1980 1632-EDT From: Dave Andre To: ALAN at MIT-MC cc: lisp-forum at MIT-MC Re: #| ... |# = /* ... */
I vote no. If the #| ... |# is used for long comments, it will probably take some variant of the following form:
#| Mumble...
 | ...
 | ...
 |#
for readability, just as /* ... */ has in other languages. Thus the original argument about not having some "comment character" at the beginning of the line loses its meaning. What we do need is to make editors smarter about ;;;'s so text comments can be manipulated more easily. I.e., it would be nice if Meta-Q worked.

Date: 1 October 1980 17:03-EDT From: Alan Bawden To: LISP-FORUM at MIT-MC cc: BUG-LISP at MIT-MC, BUG-LISPM at MIT-MC Re: #| ... |#
(SHIT! This damned keyboard just doubled a ^C and sent you all some more trash to delete! Most of you get TWO copies!)
1) Yes, I prefer ";" as the way to write a comment too. But there are things that you cannot do with it that you can do with "#|". If MIDAS programmers prefer the ";" comment and tend to use it when they can, then I suspect the same thing will remain true of Lisp programmers.
2) Yes, it is intended that the things should nest. I agree that editors won't be able to deal with this correctly, but that is a poor reason to screw ourselves. People who use "/*" in PL1 have complained bitterly for years that the damn things don't nest correctly.
3) About slashification.
I didn't think it was necessary, since you can always comment out a "|#" by typing "|/#" instead, but "/|#" is better parsed by editors. So OK, slashification is now part of the definition.
4) Anyone who uses #| | |# has just thrown away all the advantages of "#|". He might as well be typing ";;;" at the front of every line. Editors will also get confused by all the "|"s. For these reasons I expect that this will not be done. (BTW, M-Q does understand ";;;" if you set the fill prefix, but that is not always enough.)
5) How come nobody voted no the first time I brought this up?

Date: 1 October 1980 17:06-EDT From: Glenn S. Burke To: dlw at MIT-AI, KMP at MIT-MC cc: LISP-FORUM at MIT-MC Re: #| One last comment on #| |#
One can always slashify the stand-alone #| or |#; eg, (error "Read ... |/#"). This will presumably cause #| to not recognize it, so you can put it inside arbitrary numbers of #| ... |# constructions.

Date: 1 OCT 1980 2113-EDT From: RMS at MIT-AI (Richard M. Stallman) To: ALAN at MIT-AI cc: LISP-FORUM at MIT-AI
I think that #| ... |# is a bad idea because it is essentially useless and we have too much syntax already. Far too much.

Date: 3 OCT 1980 0429-EDT From: RMS at MIT-AI (Richard M. Stallman) Re: #|
It would be better to implement commands for grinding commented-out code than to add syntax to Lisp. Perhaps Tab and C-M-Q should know about the Fill Prefix.

Date: 6 December 1980 15:26-EST From: George J. Carrette Re: (progn 'compile ...)
Do we really need this? I don't really think it is a documented feature of lisp compilers that toplevel PROGN's are flattened if and only if the first form in the PROGN is (QUOTE COMPILE). I think this non-uniformity is a bit silly; at this point, if somebody wants to "protect" their DEFUN's from being compiled, they should say (EVAL '(DEFUN ...)). (Although, chuckle, at this time the maclisp compiler manages to output NOTHING when it sees (EVAL '(S-EXP)), an obvious bug).
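[For readers who haven't met the feature, a sketch of the behavior under discussion, as this editor understands it; details varied between compilers:]

```lisp
;; Flattened: because the first form is 'COMPILE, the compiler treats
;; each enclosed form as if it appeared at top level, so both DEFUNs
;; get compiled into the output file.
(PROGN 'COMPILE
       (DEFUN F (X) (1+ X))
       (DEFUN G (X) (1- X)))

;; Not flattened: a random top-level PROGN is just a form to be
;; evaluated at load time, so these definitions stay uncompiled.
(PROGN (DEFUN F2 (X) (1+ X))
       (DEFUN G2 (X) (1- X)))
```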
If all PROGN's were flattened, would anybody notice? I doubt it very much. Q: Who invented (PROGN 'COMPILE ...) and what were the reasons? -gjc

Date: 7 December 1980 16:11-EST
From: Earl A. Killian
Re: (progn 'compile ...)

I've also always wondered about this lossage. It seems to me that top-level forms ought to be considered as forms in a setup function that gets compiled and that is called automatically at load time. Is there something wrong with this? Does it have undesirable semantics, or is it just implementation reasons?

Date: 8 December 1980 02:07-EST
From: Robert W. Kerns
To: EAK at MIT-MC
cc: LISP-FORUM at MIT-MC
Re: (progn 'compile ...)

    Date: 7 December 1980 16:11-EST
    From: Earl A. Killian
    Subject: (progn 'compile ...)
    To: LISP-FORUM at MIT-MC
    I've also always wondered about this lossage. It seems to me that top-level forms ought to be considered as forms in a setup function that gets compiled and that is called automatically at load time. Is there something wrong with this? Does it have undesirable semantics, or is it just implementation reasons?

Well, the problem with this in MacLisp is that compiled functions cannot be garbage collected. Thus these 'setup functions' would remain forever. On the PDP-10 this would be rather fattening.

Date: 8 December 1980 15:13-EST
From: Kent M. Pitman
Re: PROGN 'COMPILE

I agree that toplevel function calls don't want to get compiled, but there's no reason not to flatten PROGN's. People could always pick constructs like PROG1 to put their forms in if they didn't want them to compile. Eg,

(PROG1 'TEMP-DEFINITIONS
       (DEFUN F (X) X)
       (DEFUN G (X) X)
       ...)
(F ...)
(PROG1 'REMOVE-TEMP-DEFINITIONS
       (FMAKUNBOUND 'F)
       (FMAKUNBOUND 'G)
       ...)

which is rather flavorful because it even gives you an obvious place to put a comment about the stuff in the form. Even if PROG1 wanted to flatten out, which I don't see any reason for, you could always use (EVAL '...) or could bootstrap your way into a function that could help you out.
eg,

(EVAL '(DEFUN PROGN-DON/'T-COMPILE X NIL))
(PROGN-DON/'T-COMPILE (DEFUN F (X) X)
                      (DEFUN G (X) X)
                      ...)

which presumably would not get compiled anyway, since it's a random function reference. Then later you could do (FMAKUNBOUND 'PROGN-DON/'T-COMPILE) and the world would be back to normal. Given the ease with which it is possible to trick the compiler into not compiling something, I don't understand why (PROGN 'COMPILE ...) was ever invented either. -kmp

Date: 8 December 1980 15:46-EST
From: George J. Carrette
To: KMP at MIT-MC
cc: LISP-FORUM at MIT-MC
Re: PROGN 'COMPILE

Kent, if DEFUN were implemented as

(defmacro defun (name args &rest body)
  `(setf #',name #'(lambda ,args ,@body)))

then even random function calls (FOO-BAR (DEFUN X () ...) (DEFUN Y () ...)) at top-level would have the "flattening out" effect that PROGN 'COMPILE does; in fact, there would be no need to have PROGN 'COMPILE since all forms would be treated uniformly. In that situation the only reasonable form for "protecting" yourself from the compiler (why you would want to is another question) is (EVAL '(PROGN ...)). So, I would recommend removing this PROGN 'COMPILE hack, on the grounds that it doesn't reflect a coherent design philosophy, and the semantics are somewhat dependent on the implementation of DEFUN. -gjc

p.s. I think it almost goes without saying that the places which depend on protecting themselves from the compiler are very few, and should be well marked.

Date: 9 December 1980 14:48-EST
From: Daniel L. Weinreb
Sender: dlw at CADR6 at MIT-AI
To: KMP at MIT-MC, LISP-FORUM at MIT-MC

It appears that Multics Maclisp is doing the documented thing. The Lisp Machine is doing the right thing except that EQUAL is being used where EQ should be used. Amusingly, the function NSUBST (the "destructive" version of SUBST in the Lisp Machine) uses EQ! The Lisp Machine system software uses SUBST in seven places.
Six of them (once in DISPLACE, twice in the microassembler, once in ZWEI, once in DEFSTRUCT, and once in the compiler) are (SUBST NIL NIL ...). The seventh is an ancient macro in the console diagnostic program, which uses a symbol as the second argument. There are also all the DEFSUBSTs, which use a symbol as one argument. Therefore, changing this will not affect any of the installed LispM software. However, it might break user software. In particular, I am worried about Macsyma. I would like to see the Lispm changed to use EQ as documented, but I would like to hear whether Macsyma will break, and to hear what the other LispM system people say.

Date: 23 December 1980 13:22-EST
From: Jon L White
To: CWH at MIT-MC, LISP-FORUM at MIT-MC
cc: MACSYMA-I at MIT-MC
Re: COPY ?

    Date: 19 December 1980 23:05-EST
    From: Carl W. Hoffman
    . . . My objection to COPY is that it's ambiguous. People have used that name in the past for several different meanings. For that reason, I introduced the names COPY-CONS, COPY-TOP-LEVEL, and COPY-ALL-LEVELS into Macsyma. These operations just aren't performed frequently enough to warrant shorter names. Of course, we'll go along with what the Lispm/NIL people choose ...

Even in system code, I'm quite certain that all three versions of "copy"ing are needed; for example, the DEFMAX macro cache needs to call COPY-CONS (which I presume should be an error if applied to some structured thing other than a pair, such as an s-expression array or a vector). On the other hand, I suggest something slightly more general than COPY-CONS so that one could get a new copy of an array/vector without having any of its elements copied; say, COPY1. At the very least, I'd suggest the name COPY-PAIR, with COPY-CONS a synonym thereof, but I believe COPY1 would be adequate. If COPY-TOP-LEVEL could be called COPY-SEQUENCE, then its definition would be quite straightforward -- just COPY1'ing all the successive tails of a list, and just COPY1 itself on an array/vector.
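JONL's COPY1 and COPY-SEQUENCE above can be sketched in later Common Lisp terms; the names are his proposals, not functions that were installed, and passing atoms through unchanged is an assumption made here for the sketch:

```lisp
;; Sketch of the proposed copiers.  COPY1 and COPY-SEQUENCE are the
;; names proposed in the message above; atom handling is an assumption.
(defun copy1 (x)
  ;; Copy only the topmost cell: one cons, or one level of a vector.
  (typecase x
    (cons   (cons (car x) (cdr x)))
    (vector (copy-seq x))
    (t      x)))

(defun copy-sequence (x)
  ;; COPY1 the successive tails of a list; just COPY1 on a vector.
  (typecase x
    (cons   (copy-list x))
    (vector (copy-seq x))
    (t      x)))
```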
I should like to see the name COPY-TOP-LEVEL be defined as doing a COPY1 on each element of a sequence. Then COPY* could be a two-arg function whose optional second arg is a "count" of maximum depth to descend while copying -- some large integer would cause it to become effectively COPY-ALL-LEVELS, whereas a modest integer would allow one to partially copy circular structures.

Name          | Action on PAIR "x"        | Action on VECTOR "x"
--------------|---------------------------|---------------------------------
COPY1         | (cons (car x) (cdr x))    | (vector-fill (make-vector
              |                           |    (vector-length x))
              |                           |  x)
COPY-SEQUENCE | (mapcar #'COPY1 x)        | (mapf VECTOR VECTOR #'COPY1 x)
COPY*         | (subst () () x)           | (mapf VECTOR VECTOR #'COPY* x)

Date: 23 December 1980 13:33-EST
From: Jon L White
To: CWH at MIT-MC, LISP-FORUM at MIT-MC
cc: MACSYMA-I at MIT-MC, COPY-TOP-LEVEL at MIT-MC
Re: COPY-TOP-LEVEL

BMT notes that in my previous note, COPY-SEQUENCE is not the same as the notion of COPY-TOP-LEVEL -- the latter would just be (MAPCAR #'(LAMBDA (X) X) ) on lists, and is the same as COPY1 on other sequences.

Date: 23 December 1980 13:59-EST
From: Barry M. Trager
To: JONL at MIT-MC
cc: CWH at MIT-MC, LISP-FORUM at MIT-MC, MACSYMA-I at MIT-MC, COPY-TOP-LEVEL at MIT-MC
Re: COPY-TOP-LEVEL

I propose that copy-sequence act in an analogous way on lists and vectors. In JONL's notation copy-sequence should do copy1 on a vector, and (append x nil) on a list.

Date: 01/18/81 20:41:43
From: RMS at MIT-MC

In System 55.12, ZMail 10.0, microcode 715, on Lisp Machine Four: Is COPYLIST* the replacement for (SUBST NIL NIL ...)? When I came across the function, my first thought was that it was to COPYLIST as LIST* is to LIST. But such a function should be unnecessary, since COPYLIST ought to be able to handle dotted lists, so I figured it must do something else. I can guess that it might be the SUBST, because I remember that people were talking about defining a function to do that. The name itself doesn't suggest it.
Wouldn't COPYTREE be better?

Date: 19 January 1981 21:18-EST
From: Kent M. Pitman
To: RMS at MIT-MC
cc: LISP-FORUM at MIT-MC
Re: COPYTREE

That's a very nice name. Makes the functionality pretty transparent. COPYLIST* is rather ambiguous...

Date: 20 January 1981 01:19-EST
From: Daniel L. Weinreb
To: LISP-FORUM at MIT-AI
cc: ALAN at MIT-MC
Re: COPYALL, COPYLIST, et alia

We seem to be in disagreement about which name to use for the (subst nil nil ...) function. The votes so far show that RMS votes for COPYTREE, KMP seems to agree with this, Alan likes COPYALL, Moon likes COPYLISTS, and CWH likes COPY-ALL-LEVELS. For everyone's information, COPYLIST* is like COPYLIST except for the CDR-coding. COPYLIST* creates a list whose last cons is a full-node, so that you can RPLACD it without creating invisible pointers. The Lisp Machine also has a function called COPYALIST, which copies the top two levels of structure, for association lists. It would seem to be more consistent with the naming system that we are currently using NOT to use a hyphenated name. None of our little list utilities have hyphens; whether or not this would have been a good design decision from the start is academic. At this point I consider it more obvious and aesthetic to be consistent. So I don't want to use COPY-ALL-LEVELS. Also, copying all the conses has nothing to do with lists. Lists are a higher-level abstraction built on conses; this function does not use that concept at all. So I don't want to use COPYLISTS. This leaves COPYALL and COPYTREE, with one and two votes respectively. Neither seems particularly better than the other. Anyone have further useful contributions to the discussion, or should we just pick one randomly? Let's not waste too much effort on this.

Date: 20 January 1981 0955-EST (Tuesday)
From: Guy.Steele at CMU-10A
To: Daniel L. Weinreb
cc: lisp-forum at MIT-MC
Re: COPYALL, COPYLIST, et alia

I vote for COPYTREE.
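The three copiers under discussion survive in later Common Lisp as COPY-LIST, COPY-ALIST, and COPY-TREE, which makes the differences easy to demonstrate; a sketch for readers following the vote:

```lisp
;; What each copier duplicates, shown on a two-entry association list.
(let ((alist (list (cons 'a 1) (cons 'b 2))))
  (list
   ;; COPY-LIST: fresh top-level conses, element pairs shared.
   (eq (first alist) (first (copy-list alist)))   ; => T
   ;; COPY-ALIST: top two levels copied, so the pairs are fresh.
   (eq (first alist) (first (copy-alist alist)))  ; => NIL
   ;; COPY-TREE, the (SUBST NIL NIL ...) idiom: every cons copied.
   (eq (first alist) (first (copy-tree alist))))) ; => (T NIL NIL)
```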
Date: Tuesday, 20 January 1981 10:04-EST
From: Chris Ryland
Re: COPYALL, COPYLIST, et alia

    Date: Tuesday, 20 January 1981 01:19-EST
    From: Daniel L. Weinreb
    ... This leaves COPYALL and COPYTREE, with one and two votes respectively. Neither seems particularly better than the other. Anyone have further useful contributions to the discussion, or should we just pick one randomly? Let's not waste too much effort on this.

By your argumentation, COPYTREE uses "tree" which is certainly a higher-level concept than conses, so COPYALL may be best.

Date: 20 Jan 1981 0912-EST
From: Dave Andre
To: dlw at MIT-AI
cc: DLA at MIT-EECS, LISP-FORUM at MIT-AI
Re: COPYALL, COPYLIST, et alia

I vote COPYTREE.

Date: 20 January 1981 12:42-EST
From: George J. Carrette
To: CPR at MIT-EECS
cc: LISP-FORUM at MIT-AI
Re: COPYALL, COPYLIST, et alia

Parameterized macros to the rescue.

((COPY CONS) X)                 => (CONS (CAR X) (CDR X))
((COPY CONS RIGHT-RECURSIVE) X) => (APPEND X NIL)    ; almost!
((COPY CONS RECURSIVE) X)       => (SUBST NIL NIL X) ; maybe
                ; does it stop at a non-cons, or at ATOM, or what?

The COPY-ALIST was a right-recursive copy with second-level of ((COPY CONS) ...). If we really want good names it sounds like a time to go back to the basic generalizations of tree-recursion, and come up with a compact naming convention for them. Otherwise, we may be stuck with: COPY-RECURSIVE-CONS. I doubt these copy functions are used very often. Mostly they are used when funny RPLACing is going on in other places, so having larger names might be a feature. The more expensive functions should have larger names, shouldn't they? (uhm...)

Date: 20 January 1981 16:03-EST
From: David A. Moon
Re: COPYALL, COPYLIST, et alia

In the Lisp machine at least, these copy operations sometimes need to copy uninterned symbols, strings, and bignums. I suspect something with multiple arguments is called for; the thing to copy and keywords specifying options. Would COPY be a poor choice of name, i.e.
are users likely to be using it already? If so I guess I suggest (with trepidation) that COPYLIST be changed to do this and COPYALIST and COPYLIST* be phased out.

Date: 20 January 1981 16:25-EST
From: George J. Carrette
To: MOON at MIT-MC
cc: LISP-FORUM at MIT-MC
Re: COPYALL, COPYLIST, et alia

I vote for a keyword-argument COPY. (There are a couple calls to a macro named COPY in macsyma which can be dealt with. Personally I've never used anything called COPY.)

Date: 20 January 1981 1659-EST (Tuesday)
From: Guy.Steele at CMU-10A
Re: COPYALL, COPYLIST, et alia

Another, more general copy routine would take a predicate and a thing, and copy the thing recursively iff the predicate was true of the object. (Entities should probably be copied by sending a :COPY message.) Thus: xxxx

Let me go further. Let the predicate be a pseudo-predicate determining when *not* to copy. Moreover, if the predicate returns non-() then the value of the predicate is returned as the value of this new function (call it COP, by analogy with ASS and MEM and DEL). This gives the "predicate" an opportunity to do specialized copying itself. If the object is (), then there is no good way to say not to copy it (because to do that you would have to return () -- but that means to copy the argument!); but copying () never hurts anyway, right? SO...

(DEFUN COP (PRED X)
  (OR (FUNCALL PRED X)
      (TYPE-DISPATCH X
        ((FIXNUM) X)
        ((STRING) (COPY-STRING X))
        ((LIST) (CONS (COP PRED (CAR X)) (COP PRED (CDR X))))
        ...)))

Then:

(COPYTREE X) = (COP #'(LAMBDA (Z) (AND (ATOM Z) Z)) X)

(COPYLIST X) = (COP #'(LAMBDA (Z)
                        (IF (PAIRP Z)
                            (CONS (CAR Z) (COPYLIST (CDR Z)))
                            Z))
                    X)
;kind of a sledgehammer approach

(COPYKLUDGE X) = (COP #'(LAMBDA (Z)
                          (COND ((STRINGP Z) (STRING-UPCASE Z))
                                ((ATOM Z) Z)))
                      X)
;this copies an entire tree and also converts strings to upper case

Actually, I think this is all bletcherous, but maybe someone can improve this half-baked idea. --Guy

Date: 20 January 1981 17:34-EST
From: Kent M.
Pitman
To: Guy.Steele at CMU-10A
cc: LISP-FORUM at MIT-MC
Re: (COP pred form)

That's a cute idea. Your example ought perhaps be called MAPTREE though, since it really is far more general than copying (and far less constrained). Re: Your query about better semantics... How about making pred have to return T if it wanted the object copied, and then using a standard system method for copying that object (a la typecaseq or more general method or flavors or whatever) ... eg

(DEFUN COP (PRED X)
  (IF (NOT (FUNCALL PRED X))
      X
      (TYPE-CASEQ X
        ((CONS PAIR LIST)
         (CONS (COP PRED (CAR X)) (COP PRED (CDR X))))
        ((STRING) (FORMAT NIL "~A" X))
        ... etc ...
        (OTHERWISE X)  ; Should this be an error?
        )))

(Please assume the obvious meaning/spelling for TYPE-CASEQ if it doesn't exist or runs under some other name.) The only sad thing is that PRED will almost certainly have the form

#'(LAMBDA (X) (TYPE-CASEQ X ((....) T) (OTHERWISE NIL)))

which means that the case dispatch will be done twice. Maybe a special form like COP-SURROGATE could be used as follows: (COP-SURROGATE types form) where types was an unevaluated list of types to copy. So (COP-SURROGATE (CONS STRING) form) would turn into (using the obsolete Maclisp LABEL semantics):

((LABEL COP-SURROGATE-G0001 (X)
   (TYPE-CASEQ X
     ((CONS) (CONS (FUNCALL COP-SURROGATE-G0001 (CAR X))
                   (FUNCALL COP-SURROGATE-G0001 (CDR X))))
     ((STRING) (FORMAT NIL "~A" X))
     (T X)))
 form)

Note, type dispatch might not be the only kind of thing you want. eg, I might want to copy all lists, but only hunks with a (CXR 0 h) of G0002 or something. In that case, COP-SURROGATE wouldn't be strong enough and I'd need the more general COP. -kmp

ps I suppose COP is symmetric with ASS, MEM, etc. so that's an ok name if anyone cares to put it in. COP-SURROGATE is probably not worth putting in under any name (certainly not that name!), but I present it only to illustrate something I expect to be true of the use of COP.

Date: 20 January 1981 19:33-EST
From: Daniel L.
Weinreb
To: Guy.Steele at CMU-10A
cc: lisp-forum at MIT-MC
Re: COPYALL, COPYLIST, et alia

While this is a neat idea, it does not make it particularly easy for you to just copy a list or a tree of conses. I think the goal here is to make maximally easy the things that you most often want to do. For much rarer things, it just isn't that hard to write your own Lisp function. There is a tradeoff between providing things as functions, and letting people write them by themselves. I think it is only worth providing something and putting it in the manual if it is something likely enough to be useful and easy enough to use that it is worth the time of the user to remember that the function exists. If it is too obscure or special-purpose, he will not remember that it is around, even when he needs it.

Date: 20 JAN 1981 2005-EST
From: DLW at MIT-AI (Daniel L. Weinreb)
Re: COPYTREE

After some discussion, I have come to the opinion that whether or not we put in various hairy general-purpose customizable extensible copying functions, it is clearly useful to have a simple function to do what (subst nil nil ...) does. People use (subst nil nil ...) a lot; there should be a SIMPLE, EASY thing to replace this nasty idiomatic in-joke. There should be a function that you can just call, and get the commonly-wanted thing to happen. I continue to push for installation of COPYTREE. (Something that copies uninterned symbols and bignums is presumably useful mostly for people who understand a little about storage management and are hacking areas or something; this is clearly the sort of thing that you'd only expect experts to be using, and it deserves keywords, features, and a longer name.)

Date: 20 Jan 1981 2049-EST
From: Dave Andre
To: LISP-FORUM at MIT-AI
cc: DLA at MIT-EECS
Re: COPYTREE

For practically all copying purposes which I can think of, (COPYTREE tree &optional max-n-levels) suffices.

Date: 20 January 1981 23:38-EST
From: Kent M.
Pitman
To: MOON at MIT-MC
cc: LISP-FORUM at MIT-MC

Uh, how about COPYCODE to copy bignums, uninterned symbols, etc. along with the list structure. Am I correct that you want to be able to do

(COPYsomethingorother
  '(LAMBDA (G0001)
     (IF (ZEROP (- G0001 12345123451234512345))
         G0001
         (RPLACHAR "foo" 0 (TO-CHARACTER (FORMAT NIL "~D" G0001))))))

and have the various contained constants copied? If so, COPYCODE seems like maybe what you want to call it. That would leave COPYTREE open for DLW's suggestion, and COPY open to some generic copier yet to be discussed.

Date: 21 JAN 1981 0624-EST
From: ACW at MIT-AI (Allan C. Wechsler)
To: DLW at MIT-AI
cc: LISP-FORUM at MIT-AI
Re: COPYALL, COPYLIST, et alia

Make it N+1 votes for COPYTREE. COPYALL is ugly. ---Wechsler

Date: 21 January 1981 1244-EST (Wednesday)
From: Guy.Steele at CMU-10A
To: Daniel L. Weinreb
cc: lisp-forum at MIT-MC
Re: COPYALL, COPYLIST, et alia

I wholeheartedly concur with DLW's remarks. I do *not* think that COP in its present form is worth even considering adding to the language. I started out to generalize something, and went off the deep end. I sent the message out anyway hoping to inspire someone to do better. (I said as much in the message, but I want to repeat the point here.)

Date: 21 January 1981 1250-EST (Wednesday)
From: Guy.Steele at CMU-10A
To: DLW at MIT-AI (Daniel L. Weinreb)
cc: lisp-forum at MIT-MC
Re: COPYTREE

Yes, let's put in COPYTREE. (I would have suggested calling it COPYSEXP, but that looks too much like a test for an imitative transsexual or something.)

Date: 21 JAN 1981 1643-EST
From: RMS at MIT-AI (Richard M. Stallman)

Wouldn't COP-SURROGATE be likely to be arrested? I thought it was against the law to impersonate an officer.

Date: 21 January 1981 17:06-EST
From: Carl W.
Hoffman
To: DLA at MIT-EECS
cc: LISP-FORUM at MIT-AI
Re: COPYTREE

    Date: 20 Jan 1981 2049-EST
    From: Dave Andre
    For practically all copying purposes which I can think of, (COPYTREE tree &optional max-n-levels) suffices.

Do you want levels of conses or levels of lists?

Date: 21 Jan 1981 1725-EST
From: Dave Andre
To: KMP at MIT-MC
cc: DLA at MIT-EECS, lisp-forum at MIT-MC

Using the functions in my reply to CWH, (COPYLIST 1) would copy only the top level structure, (COPYLIST 2) would be the same as the lisp machine function COPYALIST. I thought that was a reasonable generalization...

Date: 21 January 1981 18:11-EST
From: Alan Bawden
To: DLA at MIT-EECS
cc: LISP-FORUM at MIT-MC

    Date: 21 Jan 1981 1725-EST
    From: Dave Andre
    ... (COPYLIST 2) would be the same as the lisp machine function COPYALIST. I thought that was a reasonable generalization...

Well, this isn't actually a generalization of anything. If you stop and think about it you will realize that your interpretation of "level" has been colored by your interpretation of what the list structure represents. You think of this as a "list" of "conses", and thus a 2-level structure, but it might also be thought of as a "list" of "lists" and different things would be copied.

Date: 21 January 1981 17:43-EST
From: Daniel L. Weinreb
To: KMP at MIT-MC
cc: LISP-FORUM at MIT-MC

As I understand Dave's message, he is suggesting a storage copier that can do any of several things; it would have so many options that it would be appropriate to use keyword arguments. Presumably such a thing is primarily useful for people who understand about such internal hairy issues as areas and the internal representations of objects (i.e. cdr-coding). This is a function for experts; it can validly be rather complex, with options and modes relating to all sorts of internal things.
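The keyword-driven "function for experts" being described might look roughly as follows; the name COPY and both keywords are purely illustrative, not any interface that was actually installed:

```lisp
;; Hypothetical keyword-driven copier in the spirit of this thread.
;; :LEVELS bounds the cons depth copied (NIL = unbounded);
;; :COPY-STRINGS asks for fresh strings as well.
(defun copy (x &key levels copy-strings)
  (cond ((and levels (<= levels 0)) x)
        ((consp x)
         (let ((n (and levels (1- levels))))
           (cons (copy (car x) :levels n :copy-strings copy-strings)
                 (copy (cdr x) :levels n :copy-strings copy-strings))))
        ((and copy-strings (stringp x)) (copy-seq x))
        (t x)))
```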
I think it would suffice to have some functions for basic users that do sensible things merely in terms of Lisp objects, such as copylist, copytree and copyalist, and then just use this one moby function for all hairy internal-system uses. copylist* is somewhere in between these, since it doesn't make too much sense if you don't grok cdr-coding. I am not sure whether it deserves to exist with a short name like that. Certainly the moby function should subsume all the others. More votes have come in for plain-old COPYTREE. If there are no objections I will install this in the Lisp Machine.

Date: 13 March 1981 18:39-EST
From: JONL at MIT-MC, RWK at MIT-MC
Sender: JONL at MIT-MC
To: DLW at MIT-MC
cc: LISP-FORUM at MIT-MC
Re: COPYTREE

So what was decided about the range of effective copying of this name (COPYTREE) -- that is, what data types will be "descended"? Only PAIRs, or any others also?

Date: 14 March 1981 03:12-EST
From: Daniel L. Weinreb
To: JONL at MIT-MC, RWK at MIT-MC
cc: LISP-FORUM at MIT-MC
Re: COPYTREE

The Lisp Machine implementation of COPYTREE only descends through CONSes. I am not completely sure what you mean by PAIRs (whether they might include things other than CONSes), and what should happen in Lisps that support HUNKs. But I think that the idea is that it is like SUBST in this respect.

Date: 14 March 1981 07:11-EST
From: Jon L White
To: DLW at MIT-MC
cc: LISP-FORUM at MIT-MC, LISP-DISCUSSION at MIT-MC
Re: Data-types for "descending" during copying

Just got your note about the meaning of COPYTREE, and the question about "PAIR". First let me explain that I wanted to CC this reply to the LISP-DISCUSSION people, since the extended "copy" question reveals quite a bit about the semantics of new data types in LISP, and it might be a good thing to think of the inter-dialectic differences.
(Since both LISP-FORUM and LISP-DISCUSSION are mailing lists here at MIT-MC, I hope no one gets two copies of this note; if you do, please send me notice of this fact, and maybe someone here can do something about it.)

Re the name PAIR, it is fully equivalent to a "CONS" -- the name PAIR has been used by the StandardLISP people for a long time, and the NIL people seem to favor it too (since NIL and MUDDLE distinguish between the null list and other "pairs", the "LIST" data type must be a union of "PAIR" and "NULL"). So if COPYTREE is to be the name for the pair-only descending copier, then that is fine. Still, we must address the next level of copying, namely something which descends the normal "hunk"-type data -- for StandardLISP and NIL this would include VECTORs, Maclisp would include HUNKs, and for just about all lisps it would include S-expression (or ART-Q) ARRAYs. (Please, please, don't anyone else *ever* implement HUNKs; let MacLISP rest in peace. If you must do it, then give them the generality of VECTORs.) Although one could argue that SYMBOLs have "components", and would thus be descendible, I believe there is some reasonable stopping point for this level of "copyist", e.g., SYMBOLs, numbers of all kinds, non-S-expression arrays, etc.

The other important extension for copying is to "instances" of classes-of-objects (yes, there seems to be a lot of fast-changing terminology these days, what with flavours, classes, smalltalk, etc). I believe the generic notion for this data type is caught by the NIL "EXTEND", which is more-or-less the admixture of ECL's TYPES with the capabilities for any of the emerging message-passing protocols. EXTENDs have been "piggy-backed" onto MacLISP in order to get a modest class system, and a bunch of new data types; and here we chose to have SUBST send a "SUBST" message.
If a new function is admitted, with a slightly broader authorization for copying than COPYTREE, then no new messages need be introduced (i.e., we don't need a :COPY-SELF message), since the implementation of such a new function could utilise the SUBST message. Isn't it true that SUBST is more primitive, in that "copy-self" can be reduced to the "(subst () () x)" hack, but SUBST cannot be reduced to any mere straightforward usage of :COPY-SELF?

Date: 17 Mar 1981 11:08 PST
From: Masinter at PARC-MAXC
To: JONL at MIT-MC (Jon L White)
cc: DLW at MIT-MC, LISP-FORUM at MIT-MC, LISP-DISCUSSION at MIT-MC
Re: Data-types for "descending" during copying

Interlisp has three copy operations:

COPY -- copies tree down to NLISTPs.

COPYALL -- attempts to copy ALL non-LITATOM/SMALLP structures, including arrays, strings (new string, same chars), user defined data types, hash arrays (copies both keys and values), readtables, bignums, etc. It fails (I believe by causing an error) if it sees a pointer into the middle of an array, or other illegal Lisp pointer.

HCOPYALL -- similar to COPYALL, but attempts to preserve circular data structures. Requires internal storage proportional to the size of the data structure being copied (it maintains a hash table of visited nodes).

Date: 16 Oct 1980 2013-EDT
From: JoSH
Re: from a Christian Science Monitor article on software

"Intel, a California semiconductor company, is attempting to put software on its integrated circuits . . . And the Massachusetts Institute of Technology has developed a new language, named LISP, which needs fewer symbols than COBOL or FORTRAN to give computers the same instructions."

Date: 16 October 1980 20:34-EDT
From: Robert W. Kerns
To: JoSH at RUTGERS
cc: LISP-FORUM at MIT-MC
Re: from a Christian Science Monitor article on software

"NEW LANGUAGE"????? FOR CHRISAKES, IT'S OLDER THAN COBOL! YOU'D EXPECT THEM TO GET THE REASONS WRONG, BUT....
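Returning to Masinter's HCOPYALL above: the hash-table-of-visited-nodes technique it describes can be sketched as follows (HCOPY is an illustrative name, and this version handles conses only):

```lisp
;; Sketch of an HCOPYALL-style copier: a table of already-copied nodes
;; preserves shared and circular structure in the copy.
(defun hcopy (x &optional (seen (make-hash-table :test #'eq)))
  (cond ((not (consp x)) x)
        ((gethash x seen))            ; seen before: reuse that copy
        (t (let ((new (cons nil nil)))
             (setf (gethash x seen) new)  ; register before descending
             (setf (car new) (hcopy (car x) seen)
                   (cdr new) (hcopy (cdr x) seen))
             new))))

;; E.g., a circular list comes back circular:
;;   (let ((l (list 1 2 3)))
;;     (setf (cdddr l) l)          ; tie the knot
;;     (let ((c (hcopy l)))
;;       (eq c (cdddr c))))        ; => T
```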
Date: 16 Oct 1980 2202-EDT
From: JWALKER at BBNA
To: JoSH at RUTGERS
cc: Lisp-Forum at MC
Re: from a Christian Science Monitor article on software

Now everyone can enjoy what happens when real-world marketeers join hands in Symbolics and try to figure out how to sell LISP to the unbelieving business programming community. Stay tuned for more enjoyable phrases like the one from the Boston Globe article on Symbolics: "LISP is also easier to understand because it looks like English prose".

Date: 17 October 1980 12:18 edt
From: HGBaker.Symbolics at MIT-Multics
To: lisp-forum at MIT-MC, jwalker at BBNA
Re: Globe Article on Lisp

I apologize for Symbolics about the "Lisp is like English" remark. As is usual, this remark was taken out of context. Lisp is like English in that it has trivial syntax (lexically), and that it is infinitely extensible in functions and objects without going outside that syntax. It shares these properties with few other computer languages (e.g. Forth). Under no conditions was it ever meant that Lisp "looks like" English, or that it is a subset of English. --end of apology

Date: 17 Oct 1980 1505-EDT
From: Dave Andre
Re: CSM article on software

Which issue of the Christian Science Monitor has this article in it? I would really like to read the whole thing (I love a good laugh). -- Dave

Date: 17 OCT 1980 1435-EDT
From: BAK at MIT-AI (William A. Kornfeld)
To: HBaker.Symbolics at MIT-MULTICS
cc: LISP-FORUM at MIT-AI

(Where ( ((did {you} get) (the (idea (that (English has (a (trivial syntax)))))))))?
                                                            /|\
                                                             |
                                                             |
                                                             |

Date: 19 October 1980 17:12-EDT
From: George J. Carrette
To: JWALKER at BBNA
cc: LISP-FORUM at MIT-MC, JoSH at RUTGERS
Re: from a Christian Science Monitor article on software

Hey you guys, Lisp can be made to look like English prose. (LOOP unto others as they loop unto you)

Date: 25 October 1980 08:39-EDT
From: Jon L White
Re: Twenex CURSORPOS

On XX right now, the latest MacLISP .EXE file runs just as if it were on an ITS.
    Rubouts work
    Control-L clears the screen
    Wraps-around rather than scroll
    CURSORPOS works!!!

(P.S. this winning CURSORPOS comes as a result of Mike Traver's Virtual Terminal System, which should in theory be transportable to most other twenex sites. MT will shortly be leaving MIT and going to work for Foonly Inc., so we hope it all works!)

Date: 5 OCT 1980 1748-EDT
From: BAK at MIT-AI (William A. Kornfeld)
Re: Function invocation by key word

Since the subject of "parsing" argument vectors has come up, I'd like to issue an advertisement for DEFUN-KEYED which allows you to define functions that take arguments by key rather than by position in the argument vector. In many circumstances it leads to more readable code and solves the &optional argument screw where it is impossible to specify &optional argument n without specifying &optional arguments 1 thru n-1. DEFUN-KEYED runs only on the Lisp machine (for no better reason than that I have lost the perseverance necessary to get it to work with the PDP-10 compiler). It is implemented with macros in such a way that its innards are effectively invisible to the user and it entails no loss of efficiency other than at macro expansion time. It has been in use for almost a year. A description accompanies the code in AI:BAK;DEFKEY >.

Date: 6 October 1980 12:25-EDT
From: Daniel L. Weinreb
Sender: dlw at CADR6 at MIT-AI
To: BAK at MIT-AI, LISP-FORUM at MIT-AI
Re: Function invocation by key word

Your defun-keyed looks like a good thing. It's too bad you didn't write it several years ago, before we started putting keyword-argument functions in the Lisp Machine. If you look at the current Lisp Machine software system, there are a lot of functions that are called by keyword, using the simpler mechanism of simply having a &rest argument and having the function sort out the arguments. An example in the manual (from the Archaic Window System -- this function is highly obsolete) is tv-define-pc-ppr.
The example is:

(setq foo (tv-define-pc-ppr "foo" (list fonts:tvfont) ':top 300 ':bottom 400))

So the keyword names are evaluated (thus the single-quote characters), and they are in the keyword package. As I understand it, here is a point-by-point comparison:

(1) With defun-keyed the user is saved the trouble of writing the single-quotes. He doesn't get to put forms there, but nobody ever does that anyway.
(2) With defun-keyed, the parsing is done at compile time instead of run time, resulting in faster execution.
(3) With defun-keyed, an actual argument position is created for every keyword option, and so function calls must always pass all those arguments. This could result in slower execution speed.
(4) In the Lisp Machine, the keyword symbols have to be on the keyword package, but the names of the local variables have to be in the current package.
(5) The syntax is different from the one we use now for keyword arguments.

Points (1) and (2) are positive. I don't know how much of a problem (3) and (5) are. (4) could be solved by having the defun-keyed macro expander re-intern the symbols to create the keywords to be looked for; I think this would be OK.

Date: 8 October 1980 17:46-EDT
From: George J. Carrette
To: LISP-FORUM at MIT-MC, BUG-LISP at MIT-MC
Re: side effects and a complr bug.

There is a possible incompatibility between lisp machine lisp and maclisp and NIL with respect to the handling of side effects. Here is a wallpaper file with an example:

((DSK GJC) LWALL /1)

(defun mapcc (f v)
  (do ((l v))
      ((null l) v)
    (setf (car l) (funcall f (pop l)))))
MAPCC

(setq a '(1 2 3 4 5) b '(1 2 3 4 5) f '(lambda (u) (list 'f u)))
(LAMBDA (U) (LIST (QUOTE F) U))

; test1
(mapcc f a)
((F 1) (F 2) (F 3) (F 4) (F 5))
; neat

(chomp mapcc)
(MAPCC)

; test2
(mapcc f b)
;((F (F (F (F (F 1)))))) NIL CLOBBERED
;BKPT FAIL-ACT
QUITQUIT

b
(1 (F 1) (F (F 1)) (F (F (F 1))) (F (F (F (F 1)))))
; ah, so that's what it's doing!
(walend) The maclisp expanders for SETF/PUSH/POP try very hard to guarantee the order of evaluation as the lexically apparent order of evaluation, and to also keep the number of evaluations to 1. The macros themselves also try to avoid creating extra temporary locations, which is hard to do sometimes without error; I guess that's what causes the bug once MAPCC above is compiled. The DEFUN-KEYED macro of BAK is another example. The order of evaluation of the arguments is given by the macro definition, not by the apparent calling of the function. It's easy enough to make sure the evaluation is as apparent in the call, although arguments for not doing this can be made in terms of efficiency of the run-time or the compiler. Question: What do you think we should do about all this? -gjc p.s. I just HAD to come up with something to get people off of DIGIT-WEIGHT!  Date: 8 October 1980 17:50-EDT From: George J. Carrette To: LISP-FORUM at MIT-MC, BUG-LISP at MIT-MC, BAK at MIT-MC Re: DEFUN-KEYED. A maclisp FASL file for BAK's DEFUN-KEYED is in GJC;DEFKEY FASL (All ya had to do was define DEL and give a *LEXPR declaration. Although I can imagine that the frustration of trying to run COMPLR a few times on AI is enough to make you swear off.) Date: 30 NOV 1980 2106-EST From: BAK at MIT-AI (William A. Kornfeld) Re: DEFSTRUCT and reclaimable storage [This msg principally meant for Lisp machine users but may be of wider interest.] There are a number of places in programs that I write in which I create structures (ART-Q arrays) of specific types copiously and know when I am finished with them. As such, it would take a load off the garbage collector if I could indicate that specific instances could be reused for structure instances to be created in the future.(*) The idea would be that if the structure was defined as: (DEFSTRUCT (FOO :RECLAIMABLE ...) ...) a function of one argument named RECLAIM-FOO would be defined.
(RECLAIM-FOO <instance>) will add <instance> to a linked chain of structures bound to the variable FOO-FREE-CHAIN. The linking is accomplished by having a designated cell of the structure point to the next instance. The function MAKE-FOO will be defined to first check if there is an instance sitting on FOO-FREE-CHAIN and if so use that one rather than consing a new one. Any Lispm system hacker interested in implementing this? (I found the code for DEFSTRUCT hard to grok.) * I realize that some people have a philosophical objection to doing this. I don't want to get into a debate about it. If you're curious I'll send you descriptions of the applications I have in mind for which it would be an extremely useful facility.  Date: 30 November 1980 22:03-EST From: Daniel L. Weinreb Sender: dlw at CADR7 at MIT-AI To: BAK at MIT-AI, LISP-FORUM at MIT-AI Re: DEFSTRUCT and reclaimable storage You needn't worry about intense philosophical disagreement from the Lisp Machine system hacker community; while automatic freeing of storage is nice, we all recognize that sometimes you don't need it and can gain a lot of efficiency by returning storage manually. In fact, there is currently an installed Lisp Machine system feature called "resources" that does something like what you want. You can define a new resource, which gives the resource a name and says how to make a new object; then you can say (with-resource (<variable> <resource-name>) form1 form2 ...) which will lambda-bind <variable> to an object from the resource. When you exit the special form (with a normal or THROW-type exit), the item is returned to the resource pool. So the way you would do what you want to do is to define a new resource whose definition includes a defstruct constructor macro. This should not be built into defstruct, since you might also want a resource of objects that are not structures (arrays, flavor instances, 1000-long lists, etc.). I am not sure whether anyone has documented the functions and special forms of the resource feature.
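[For concreteness, the resource idiom described above might look roughly like this. This is only a sketch: the names FOO-POOL, FROB, and MUNGE are hypothetical, and the DEFRESOURCE/WITH-RESOURCE argument conventions shown here are assumptions, not the documented Lisp Machine interface.]

```lisp
;; Hypothetical sketch of the "resources" idiom.  DEFRESOURCE names a
;; pool and says how to make a fresh object; the constructor here could
;; be a defstruct constructor macro.
(defresource foo-pool ()
  (make-foo))

;; WITH-RESOURCE lambda-binds X to a pooled object for the extent of the
;; body; on exit (normal or THROW-type), X goes back into the pool.
(with-resource (x foo-pool)
  (frob x)
  (munge x))
```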
If I find out that they are not documented, I will document them.  Date: 30 NOV 1980 2233-EST From: BAK at MIT-AI (William A. Kornfeld) To: DLW at MIT-AI, LISP-FORUM at MIT-AI Re: DEFSTRUCT, reclaimable storage, and "resources" If I understand the Lispm resource feature, it requires allocation and deallocation to happen within some lexical contour. Many of the applications I have do not behave in a way that could take advantage of this facility. For example, I make use of a queue of linked structures (each one containing a pointer to the next). After a structure has "reached" the front of the queue (after some indeterminate time, and not within the lexical contour in which it was created) the structure is reclaimable. My suggestion still stands, although I guess it is possible that a new facility could be created that deals with more general objects than structures implemented as art-q arrays.  Date: 30 November 1980 22:46-EST From: Alan Bawden To: BAK at MIT-MC cc: LISP-FORUM at MIT-MC Re: DEFSTRUCT and reclaimable storage Well, I'm not actually sure that this is really something that belongs in defstruct. Defstruct already has an amazing amount of hair in it. (The code you were looking at is about to be replaced by a newer version with even more features.) Is there some reason that what you want has to be made part of defstruct itself? Is there some reason I can't see that prevents you from simply writing the allocator you describe? Date: 29 August 1980 1728-EDT (Friday) From: Guy.Steele at CMU-10A Re: Generalized LET due to RMS I would like to express some support for a generalization of LET which is due to RMS, I believe. This permits the binding specification of a LET pair to be any expression constructed from data selectors (such as CAR, CDR, AREF, and selectors declared by DEFSTRUCT) *and* data constructors (such as LIST, CONS, and constructors declared by DEFSTRUCT).
Note that a variable name is a kind of middle case halfway between constructors and selectors. Thus, for example, one could write: (LET ((`(,A B ,(CAR C)) (FOO 1 2))) ...) to mean that the call to FOO is expected to return a 3-list whose second element is the atom B. The variable A is bound to the first element and (CAR C) is bound to the third element. Note that this provides a simple and uniform syntax for binding things other than variables if one's implementation will support it. One can write (FUNCTION X) or (FSYMEVAL 'X) to bind the function cell of X; (GET X 'PROP) to bind the PROP property of the value of X; and so on. This method, using constructor expressions, is superior to the simple destructuring versions we have used in the past because it allows the destructuring and non-variable-binding aspects to be used together. Also, it allows destructuring of a data object (possibly user defined) for which there is yet no convenient reader syntax for writing an instance of the data object itself in the code. Thus instead of (LET ((#(A B C) ...)) ...) one writes (LET (((VECTOR A B C) ...)) ...); and if I invent a routine 2X2 which constructs a two-by-two array, I don't need to invent a reader syntax for such arrays so I can write it in a LET -- I just write (LET (((2X2 A B C D) ...)) ...). All I need is to set up a little data base for that constructor name and the appropriate selectors, such as for SETF. (The means for doing this should be standardized.) The MULTIPLE-VALUE-BIND construct can be absorbed by LET as a special case in this new implementation, which is rather flavorful. Suppose we let VALUES be a function which returns all its arguments: (DEFUN VALUES (&REST X) (PROG () (RETURN-LIST (APPEND X '())))) then one needn't use the crufty PROG syntax for returning multiple values: (DEFUN FOO (A B) (CROCK A) (VALUES (CAR A) (HACK B) (LIST B))) The function FOO returns three values. 
Then I can call FOO and save the three values by writing: (LET (((VALUES X Y Z) (FOO P Q))) ...) getting the three values in X, Y, and Z. For the theoretically minded, I can't help noting how reminiscent this scheme is of the operation of resolution.  Date: 30 September 1980 03:36-EDT From: Alan Bawden To: DLW at MIT-AI, Guy.Steele at CMU-10A cc: LISP-FORUM at MIT-MC Re: destructuring and &mumbles I don't see where &mumbles have anything to do with destructuring. As the person who first combined destructuring and &mumbles in defmacro, let me tell you what I thought I was doing: Sometimes you wanted defmacro to do destructuring; we had that and it was useful; but sometimes you wanted it to behave like defun and have optional arguments. Realizing that you could allow both syntaxes, I wrote the defmacro that the Lisp Machine is still using, but the interaction between the two features is not smooth. For example, &mumbles are not allowed in places other than the top-level argument list; why not (defmacro yech (foo (bar &optional baz) &optional (qaz 'wsx)) ...) ?? Also why doesn't (defmacro fruity (&optional ((a 'apple) (b 'banana))) `(cons ,a ,b)) define a macro such that (fruity) expands into (cons 'apple 'banana) and (fruity (x y)) expands into (cons x y) ?? The reason "why" is because this is really a KLUDGE to allow defmacro to look like defun sometimes and destructure other times. The fact that (a &rest b) happens to mean the same thing as (a . b) is amusing, but misleading. I would never write (a (b c) d &rest e) since (a (b c) d . e) is more clearly destructuring. Nor would I write (a &optional b . c) instead of (a &optional b &rest c) since the latter is more clearly defunish. I don't think that &mumbles and destructuring combine to make any kind of reasonable pattern matcher. If the interaction had turned out to be amazingly elegant, then perhaps we would have moved the combined mess back into defun, but it ISN'T.
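[For concreteness, the two defmacro styles being contrasted here might look like this. The macros SWAP-CONS and BUMP are hypothetical, and the sketch assumes a defmacro that accepts both dotted patterns and &mumbles, as described.]

```lisp
;; Destructuring style: the dotted pattern (a . b) is matched
;; against the macro call's argument list.
(defmacro swap-cons ((a . b))
  `(cons ,b ,a))
;; (swap-cons (x . y)) would expand into (cons y x)

;; Defun-ish style: &optional with a default, no destructuring.
(defmacro bump (place &optional (delta 1))
  `(setq ,place (+ ,place ,delta)))
;; (bump counter)   would expand into (setq counter (+ counter 1))
;; (bump counter 5) would expand into (setq counter (+ counter 5))
```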
In defmacro you frequently want BOTH features, but in defun you rarely need destructuring so there is no reason to clutter it up.  Date: 1 October 1980 03:54-EDT From: Daniel L. Weinreb Sender: DLW at CADR2 at MIT-AI To: ALAN at MIT-MC cc: LISP-FORUM at MIT-MC Re: destructuring and &mumbles Actually, I don't think GLS was talking about the use of &mumbles in DEFMACRO; he was comparing the use of &mumbles in DEFUN to destructuring in DEFMACRO. The &mumbles in DEFMACRO didn't have anything to do with it.  Date: 1 October 1980 04:40-EDT From: Alan Bawden To: DLW at MIT-AI cc: LISP-FORUM at MIT-MC Re: destructuring and &mumbles Date: 1 October 1980 03:54-EDT From: Daniel L. Weinreb Actually, I don't think GLS was talking about the use of &mumbles in DEFMACRO; he was comparing the use of &mumbles in DEFUN to destructuring in DEFMACRO. The &mumbles in DEFMACRO didn't have anything to do with it. The case was made that &rest had something to do with destructuring. I was giving my reasons why &mumbles have nothing to do with destructuring; my reasons have to do with the way they interact in defmacro (badly). GLS may not have been talking about &mumbles in defmacro, but I was. Can't I introduce some ideas of my own to the discussion? I can't believe it. I'm arguing about the correctness of my argument with the people I am trying to agree with!  Date: 1 October 1980 12:41-EDT From: Kent M. Pitman To: Dave Touretzky at CMU-10A cc: LISP-FORUM at MIT-MC, MACSYMA-I at MIT-MC Re: Dave Touretzky's message on macros, destructuring, and functions On the LispM, bvls for all primitive special forms take a standard format which never includes destructuring. DEFMACRO is not a primitive, but explicitly a macro-writing macro and so it does extra work for you. This is perhaps not ideal, but it is certainly the most consistent I have seen in any major Lisp dialect, including Interlisp. The destructuring capabilities offered by DEFMACRO need to be generally available, I will grant you that.
Destructuring LET gives you that. Again, for uniformity, the default LispM LET does not destructure. ALAN does have a version available which does, however. Perhaps if there were a subr, DESTRUCTURE, which allowed one to turn ({destructuringbvl} . {body}) into ({nondestructuringbvl} . {destructuringbody}) then one could write trivial definitions like: (MACRO DEFMACRO (FORM) `(MACRO ,(CADR FORM) ,@(DESTRUCTURE (CDDR FORM)))) (MACRO DEF-DESTRUCTURING-FUNCTION (FORM) ; This needs another name `(DEFUN ,(CADR FORM) ,@(DESTRUCTURE (CDDR FORM)))) and the problem would be solved. In this way, the destructuring machinery currently available only to DEFMACRO (or at least not documented otherwise) would be able to serve a more general use, making things so easy to customize that we wouldn't need this discussion which seems only to be asking why destructuring is not as trivially available to SUBR-writers as it is to macro writers. -kmp  Date: 1 October 1980 13:04-EDT From: Kent M. Pitman To: DLW at MIT-AI cc: LISP-FORUM at MIT-MC Re: Clearing up a fine point in your reply to Touretzky Date: 1 October 1980 04:02-EDT From: Daniel L. Weinreb ... To say that "A macro is still a function, just like a FEXPR is" is quite wrong; FEXPRs aren't functions either... I think actually that he meant a FEXPR is like a macro is like a normal function is like a special form ... etc. Ie, they all have at their core a thing called a LAMBDA. A better wording might have been: "...a macro has associated with it a normal function, just as does a fexpr..." I agree with the rest of your comments, however. -kmp  Date: 1 October 1980 14:08-EDT From: Daniel L. Weinreb Sender: dlw at CADR8 at MIT-AI To: LISP-FORUM at MIT-MC, MACSYMA-I at MIT-MC Re: Dave Touretzky's message on macros, destructuring, and functions What Kent suggests will certainly work. However, I was hoping to use the name DESTRUCTURE on the Lisp Machine to be a special form which is exactly destructuring LET. 
Then you can write: (defun foo (x) (destructure (((a b . c) d) x) )) to get the simple "destructuring DEFUN", but you could also take several arguments and destructure one (or many) of them, destructure results of function calls, etc., all with a very simple form. I'd like to see the LispM destructuring LET installed, but under the name DESTRUCTURE.  Date: 1 October 1980 17:35-EDT From: Jon L White To: LISP-FORUM at MIT-MC cc: DLW at MIT-AI, Guy.Steele at CMU-10A Re: destructuring within "argument" positions Commentary and BNF for Bound-variable-lists occasioned by ALAN's note below, and all the other notes flaming around this issue. Date: 30 September 1980 03:36-EDT From: Alan Bawden Subject: destructuring and &mumbles . . . As the person who first combined destructuring and &mumbles in defmacro, let me tell you what I thought I was doing: . . . I don't think that &mumbles and destructuring combine to make any kind of reasonable pattern matcher. If the interaction had turned out to be amazingly elegant, then perhaps we would have moved the combined mess back into defun, but it ISN'T. In defmacro you frequently want BOTH features, but in defun you rarely need destructuring so there is no reason to clutter it up. "Destructuring" and &mumbles are independent features of a bound variable list, and it is a sad commentary that two years after the consistent and simple notion of "pattern-matching-destructuring" for argument positions was introduced, there still has to be flaming about it ("elegant" in DEFMACRO, but "kludgy" in DEFUN -- really?) Admittedly, there is an alternate proposal for "destructuring" -- RMS's idea of destructuring over programs rather than data forms -- but this does not change the incredibly simple syntax for a DEFUN/DEFMACRO with destructuring.
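[The "pattern-matching-destructuring" for argument positions referred to here is simply allowing a pattern wherever a variable could appear. A small illustration, using a hypothetical macro:]

```lisp
;; The second "argument" of the macro call is a two-element list that
;; is taken apart by the pattern (lo hi) at expansion time.
(defmacro in-range? (x (lo hi))
  `(and (not (< ,x ,lo)) (not (> ,x ,hi))))
;; (in-range? n (0 9)) would expand into
;; (and (not (< n 0)) (not (> n 9)))
```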
Leaving aside for the moment the &AUX as irrelevant, and the potential for destructuring over Vectors in NIL, we can have the following BNF-like definitions: <var> ::= any symbol usable as a variable (which excludes NIL and T in MacLISP) <pattern> ::= <var> | ( <pattern> . <pattern> ) <opt-pattern> ::= <pattern> | ( <pattern> <init-form> ) | ( <pattern> <init-form> <var> ) <bvl> ::= ( {<pattern>}* {&OPTIONAL {<opt-pattern>}*1} {&REST <pattern>} ) The name <pattern> is for an argument position which is destructurable; <opt-pattern> is for a position which admits an initialization value when it is optionally not supplied (and also an optional "supplied-p" variable); the construct {...}* means 0 or more repetitions of the construct within brackets, and {...}*1 means 1 or more. This definition for <bvl> is adequate then for both DEFUN and DEFMACRO, with the additional proviso that ( <pattern> ... . <pattern> ) is an additional kind of destructuring for DEFMACRO, which coincidentally happens to be equivalent to ( <pattern> ... &REST <pattern> ).  Date: 1 OCT 1980 2132-EDT From: RMS at MIT-AI (Richard M. Stallman) I think that destructuring in DEFUN would be a reasonable idea. It's true that there are good ways to think of things if destructuring is absent, but that is not a reason for not having it, because there are other good ways to understand things if destructuring is present. If there are good ways to understand it either way, we can take our choice based on other criteria and be assured that, either way, the result will be possible to understand in a good way. This is much better than arguing about which good way of understanding it is "right". However, I don't think that the Maclisp style of destructuring should be adopted at all. It closes out the entire syntactic space -- if you adopt it, you can't ever allow anything else, because there is no room for an escape. If we want to allow things besides destructuring in DEFUNs, LET, etc. -- such as binding the CAR of a list, which is perfectly possible on the Lisp machine (and maybe in Maclisp too) but has no syntax to ask for it with -- or anything else we haven't imagined yet -- then destructuring must be done with a different syntax which leaves space for other things. If destructuring is done as follows: (defun foo (x (list z (list w y))) ...) or, equivalently (defun foo (x `(,z (,w ,y))) ...) then there is room also to allow (defun foo ((car x)) ...)
which puts the argument in the car of x, or combine them as in (defun foo ((cons (car x) (cdr x))) ...) which displaces the argument into x. We can also handle multiple values with this scheme. If we have a function (VALUES x y z) that returns three values, x y and z, then we can do (setf (values x y z) ...) instead of using multiple-value, and we can do (let (((values x y z) ...)) ...) instead of using multiple-value-bind. I wonder whether this extends to defun in any way?  Date: 2 October 1980 1340-EDT (Thursday) From: Guy.Steele at CMU-10A Re: Destructuring syntax As JONL has pointed out, there are only three &mumbles of interest. We can ignore &AUX by fiat, and &REST is the same (in some sense) as ".", so that leaves &OPTIONAL. I might suggest extending JONL's BNF slightly to allow non-top-level &OPTIONALs: <var> ::= symbol-for-a-variable <pattern> ::= <var> | <bvl> <opt-pattern> ::= <pattern> | ( <pattern> <init-form> ) | ( <pattern> <init-form> <var> ) <bvl> ::= ( {<pattern>}* [&OPTIONAL {<opt-pattern>}+] [<rest-spec>] ) <rest-spec> ::= . <pattern> | &REST <pattern> where square brackets mean 0 or 1 of, {...}* means 0 or more of, and {...}+ means one or more of. Adding &OPTIONAL into RMS's syntax is more difficult. I'm working on that. May not be worth it?  Date: 8 October 1980 14:41-EST From: Jon L White To: CWH at MIT-MC cc: LISP-FORUM at MIT-MC, Touretzky at CMU-10A Re: Destructuring, GLS's and RMS's suggestions, and &EVAL. A little secret: Date: 2 October 1980 22:20-EDT From: Carl W. Hoffman Subject: Dave Touretzky's message on macros, destructuring, and functions I like your DESTRUCTURE function idea a lot. Could such a thing be implemented for MacLisp? The destructuring DEFUN has been in MacLISP for a long time -- since the LISP RECENT note of " Sunday December 9,1979 FM+5D.18H.52M.12S. LISP 1914/COMPLR 904 -JONL,RWK- " (with occasional lapses of proper interfacing between DEFUN and DEFUN/&, cough, cough!) For example, note this little example in the current MacLISP: (SETQ DEFUN&-CHECK-ARGS () ) (DEFUN FOO ( ((B . C) (D E)) A &OPTIONAL ((G . H) E /3RD?)
&REST W) (mumble)) (GRINDEF FOO) ==> (DEFUN FOO G0005 (COMMENT ARGLIST = (((B . C) (D E)) A &OPTIONAL ((G . H) E /3RD?) &REST W)) (LET ((W (AND (> G0005 3) (LISTIFY (- 3 G0005)))) (A (ARG 2)) (((B . C) (D E)) (ARG 1)) G H /3RD?) (DESETQ (G . H) (COND ((> G0005 2) (ARG 3)) (E))) (AND (> G0005 2) (SETQ /3RD? 'T)) (mumble))) I have one or two comments on the extension proposed by GLS Date: 2 October 1980 1340-EDT (Thursday) From: Guy.Steele at CMU-10A . . . We can ignore &AUX by fiat, and &REST is the same (in some sense) as ".", so that leaves &OPTIONAL. I might suggest extending JONL's BNF slightly to allow non-top-level &OPTIONALs: . . . Adding &OPTIONAL into RMS's syntax is more difficult. I'm working on that. May not be worth it? If I understand the BNF right, the extension would allow for forms like (LET ( ((A B &OPTIONAL (C '35)) <form>) ) ) which would differ from (LET ( ((A B C) <form>) ) ) only in that if the value of <form> is a list of length less than 3, then C is bound to '35 instead of (). This would restrict binding a variable with the name &OPTIONAL, which is a minor loss since such a variable can't now be bound in a DEFUN's variable list, leaving only an explicit LAMBDA as a way to bind this variable (yea, verry minor loss!). Presumably the former would be preferable to doing it by hand (LET ( ((A B . Z) <form>) (C (IF Z (CAR Z) '35)) ) ) A more serious problem to be faced is working RMS's destructuring syntax into the already-extensively-used MacLISP destructuring scheme: Date: 1 OCT 1980 2132-EDT From: RMS at MIT-AI (Richard M. Stallman) I think that destructuring in DEFUN would be a reasonable idea. . . . However, I don't think that the Maclisp style of destructuring should be adopted at all. It closes out the entire syntactic space -- if you adopt it, you can't ever allow anything else, because there is no room for an escape. . . . If destructuring is done as follows: (defun foo (x (list z (list w y))) ...) or, equivalently (defun foo (x `(,z (,w ,y))) ...) . . .
With all respect, I'd like to offer an "escape" for the data-formatted destructuring which would allow intermixing of the program-formatted destructuring: &EVAL. If something like GLS's &OPTIONAL were to be adopted into the data-formatted scheme, then the precedent is already set for "escapes" in general. Thus two ways to write the "foo" above would be (defun foo (x (&eval (list z (list w y)))) ...) or, if "#@" could be another "quoter" producing (&eval ...), then (defun foo (x #@`(,z (,w ,y))) ...) In the program-formatted destructuring syntax (which I've been attributing to RMS), the "escape" to data-directed format could be, say, "e. Consider a "mixed-mode" in both schemes: data-formatted style: (DEFUN FOO (X (Z (&eval (list W Y)) ) ) program-formatted style: (DEFUN FOO (X (list Z ("e (w y)) ) ) The duality of "escapes" cannot be escaped here, and was probably the subject of some thesis or paper 15 (or so) years ago. However, there is very little at all to be gained by using the program-formatted destructuring only to destructure over data -- the data-formatted would always be more concise. But as RMS points out, one would not like to close the door to syntactic extensions which would allow binding something other than a variable, and the program-formatted is the more general choice here. One more possibility for extension is to include a little "matching" along with the destructuring -- for example, suppose one wants to destructure a list which is supposed to be a LAMBDA expression into two parts, the Bound-Variable-List and the Body. He could say (DESETQ ( () BVL . BODY) <form>) or alternatively, (DESETQ (&eval `(LAMBDA ,BVL ,. BODY)) <form>) only in the latter form, the explicit quote on LAMBDA would mean not only that nothing is to be bound at this point (just as the () means in the former), but also that a check should be performed to see that the car of <form> is equal to LAMBDA.
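[The lambda-splitting example just given can be written out with concrete values; a sketch, assuming a DESETQ that assigns rather than binds:]

```lisp
;; The () in the pattern matches, and discards, the symbol LAMBDA;
;; BVL gets the bound-variable list and BODY the list of body forms.
(desetq (() bvl . body) '(lambda (x y) (+ x y)))
;; bvl  => (x y)
;; body => ((+ x y))
```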
But I digress -- I don't want to resurrect arguments over how much of a pattern-matching language should be built into LISP.  Date: 31 JAN 1981 1706-EST From: HENRY at MIT-AI (Henry Lieberman) Re: vote on destructuring I am definitely in favor of including destructuring in LET, DEFUN, and whatever contexts it is appropriate in. In fact, I think it would be a win if this were extended to provide full pattern-matching (for example, like Act 1 - I have a set of Lisp macros that essentially do this). There should be a way for the user to define new patterns by putting a property on a symbol which names the pattern. I would prefer that a list used as a pattern mean to bind the elements of the list to the elements of the pattern, rather than RMS's suggestion that they bind multiple values to elements of a pattern. I'd rather have a special pattern, say called VALUES, for that. In general, I'd like to see a reduced dependence on multiple values, and I'd rather not have system functions return multiple values. First, transportability to other Lisp dialects suffers, but I agree that's a secondary consideration. More importantly, I think there are still some instances where multiple values are not smoothly integrated into the system (on LispM, and I assume MacLisp too). For instance, I have been screwed by things like the fact that if you write (PROG1 (FUNCALL SOME-FUNCTION) (OTHER-CRUFT) ...) without being aware that SOME-FUNCTION may return multiply, the construct isn't as transparent as you think it might be, and so on.  Date: 31 JAN 1981 1726-EST From: RMS at MIT-AI (Richard M. Stallman) To: HENRY at MIT-AI cc: LISP-FORUM at MIT-AI In case anyone else was confused, my suggestion was that the things that can appear in place of a symbol in a LET be exactly the same as what can appear in a SETF, with the same meanings. So a list of symbols does not mean binding multiple values. In fact, it doesn't mean anything in particular. Its meaning depends on the first symbol.
If the first symbol is VALUES, then the rest are bound to the multiple values. If the first is LIST, then the rest are bound to elements of a list. If the first is CAR, then the second is evaluated and its car is bound.  Date: 2 February 1981 09:08-EST From: Jon L White To: DLW at MIT-MC, dick at MIT-AI cc: LISP-FORUM at MIT-MC, LISP-DISCUSSION at MIT-MC, CLR at MIT-MC Re: Destructuring syntax Dan, I think you're being far far too conservative in your opinion about extending lambda-binding: Date: 2 February 1981 02:56-EST From: Daniel L. Weinreb . . . Destructuring is new functionality; there should be a new special form that does it. . . . Modifying all lambda-binding special forms to have a feature that has nothing to do with lambda-binding is what I object to. In the LISP RECENT note (now in the LISP NEWS file) dated "Jan 27,1979" there was presented a very succinct bnf syntax for "lambda-lists", which would take care of LET, DEFUN, DO, PROG etc. (see item 3b in that note). In fact, only DEFUN now allows the & keywords, and Quux later sent an addendum giving some ideas on how &optional and &rest could be added to the others. This syntax is ** fully-upward compatible **, and entirely unambiguous (as the bnf shows). The fact that many ordinary users are rudely surprised when they find it is unavailable on the LISPM shows that it must be a "natural" extension too. In fact, at other non-MIT sites this sort of "destructuring" in the lambda-list has been independently invented and used, but private conversations with these types usually meet extreme resistance to the & keywords (even now, KMP still objects to it, and has other ways of doing these things). As far as I know, LISPM/MacLISP/NIL is the only dialect supporting the & keywords, but lots of others support destructuring. I'd be interested in hearing some non-MIT, non-hacker ideas on this.  Date: 2 Feb 1981 1006-EST From: PDL at MIT-DMS (P.
David Lebling) To: JONL at MIT-MC cc: DLW at MIT-MC, dick at MIT-AI, LISP-FORUM at MIT-MC, LISP-DISCUSSION at MIT-MC Re: [Re: Destructuring syntax] MDL supports the &keywords in somewhat different form. The keywords are given as strings, and the keyword &rest is called "TUPLE" in MDL. Other than that, the syntax is identical. This is not surprising, since the LispM &keyword syntax was based on MDL. Dave  Date: 2 February 1981 17:09-EST From: George J. Carrette To: dlw at MIT-AI cc: LISP-FORUM at MIT-AI Re: destructuring When you say it would be *horrible* to have to introduce new functionality in the LAMBDA constructs, you mention that "lots of things in the existing world would have to be changed." Last I heard, there was only ONE lambda-binding construct in lisp, and that is LAMBDA. The lambda-binding done by DO, PROG, and LET can simply macro-expand into either LAMBDA or (perhaps call it) SUPER-LAMBDA. So the changes should be highly localized. Don't you see how you only increase your lossage more by introducing yet more special constructs, just because the existing ones have been implemented in a wedged (i.e. stuck) manner? -gjc  Date: 3 February 1981 03:04-EST From: Daniel L. Weinreb To: DICK at MIT-AI, JONL at MIT-MC cc: LISP-FORUM at MIT-MC Re: Destructuring syntax In the LISP RECENT note (now in the LISP NEWS file) dated "Jan 27,1979" there was presented a very succinct bnf syntax for "lambda-lists", which would take care of LET, DEFUN, DO, PROG etc. (see item 3b in that note). This syntax is ** fully-upward compatible **, and entirely unambiguous (as the bnf shows); This has nothing to do with anything I said. I said nothing about disliking the syntax, or that it was ambiguous. I said, as you quoted me: Modifying all lambda-binding special forms to have a feature that has nothing to do with lambda-binding is what I object to. I object to the SEMANTICS.
The fact that many ordinary users are rudely surprised when they find it is unavailable on the LISPM shows that it must be a "natural" extension too. This is ridiculous. All this shows is that people are using it in their code. My interpretation is simply that this "destructuring lambda" is an easy misconception for people to have. Just because lots of people think something does not make it right. The number of dialects that support it and the number of users who, having seen the announcement of the feature, started using it, does not affect my opinion.  Date: 3 February 1981 03:21-EST From: Daniel L. Weinreb To: GJC at MIT-MC cc: LISP-FORUM at MIT-AI Re: destructuring The Lisp Machine does not implement PROG and DO by expanding them into lambdas, and it would be unreasonably inefficient for it to do so. So several things have to be changed. This is only a minor subpoint anyway.  Date: 3 February 1981 11:09-EST From: Kent M. Pitman To: DLW at MIT-AI cc: JONL at MIT-MC, LISP-FORUM at MIT-MC, DICK at MIT-AI, Guy.Steele at CMU-10A Re: Destructuring LET Date: 3 February 1981 03:04-EST From: Daniel L. Weinreb Re: Destructuring syntax From: JONL In the LISP RECENT note (now in the LISP NEWS file) dated "Jan 27,1979" there was presented a very succinct bnf syntax for "lambda-lists", which would take care of LET, DEFUN, DO, PROG etc... This syntax is fully-upward compatible, and entirely unambiguous (as the bnf shows); Personally, I think the proposed bnf was terrible. I think it was a terrible idea to have introduced &keywords into Lisp in the first place. There were alternative ways of specifying things that would have been much more elegant. But that change is in a lot of code now and there's no real way to change it. I was highly disappointed in GLS for the support he lent to that discussion (2-Oct-80: Destructuring Syntax). His suggestions are usually more sound.
Describing lisp syntax in terms of bnf is adding clutter to the language definition which has no reason to be there. The fact that many ordinary users are rudely surprised when they find it is unavailable on the LISPM shows that it must be a "natural" extension too. This is ridiculous. All this shows is that people are using it in their code. My interpretation is simply that this "destructuring lambda" is an easy misconception for people to have. Just because lots of people think something does not make it right. The number of dialects that support it and the number of users who, having seen the announcement of the feature, started using it, does not affect my opinion. Oh, it's fair to not worry about what people are using in their code? Then perhaps I should push the issue of &keywords again and ask people to change them. A number of people I have talked to agree they were a mistake, but feel they are too dug in to remove. Is there a double standard here? Maclisp programmers use destructuring LET extensively. The failure of the Lisp Machine to provide this feature deserves strong justification, since it means that every time we try to transport programs to the Lisp Machine, we have to either clobber the system LET, put the code in a separate package, or rewrite a whole pile of code. None of these alternatives is very attractive. Since you are not currently attaching any meaning to the syntax used by destructuring LET, you are in the best position to go for compatibility. I am not saying you must do so for the sake of compatibility; I am saying you should consider doing so for the sake of compatibility. Making code as compatible as possible is of large value to a lot of people. We really need to address that issue. At the very least, LispM lisp should provide SOME primitive with the exact semantics of Maclisp LET in the default system. It should preferably be about 3 characters long so that major reformatting of files isn't needed.
I disagree that anything more than LET and LET* need to do destructuring. Generalizing to make DO, PROG, etc. do the same thing is silly. I would, I suppose, be satisfied with DLET and DLET* which could be macros. If you decided to do this, then I would advocate RMS' suggestion not be adopted into the syntax of LET, since LispM->Maclisp compatibility would be destroyed by such an action in the same way as Maclisp->LispM compatibility is currently mucked up. -kmp

Date: 3 February 1981 1251-EST (Tuesday)
From: Guy.Steele at CMU-10A
To: Kent M. Pitman
cc: lisp-forum at MIT-MC
Re: Disappointment

Sorry to have let you down, KMP. I wasn't terribly serious about adopting that BNF syntax; I merely put it forward as a logical extension in the spirit of exploring the design space. (At the end of my note of 2 October, I said "May not be worth it?".) Anyway, lest anyone misinterpret my opinions, let me say this:

(1) In general, when I say something *could* be done, I do not necessarily mean at all that it *should* be done. (As an example, the note of 2 Oct was clearly labelled a suggestion, not an exhortation.)

(2) In this specific case, I continue to support most strongly, as I have stated several times in the past, RMS' destructuring syntax. It is general, encompassing both destructuring and binding of SETF'able things in one framework. It is extensible, readily accommodating new kinds of destructuring because of the use of natural keywords (for example, using VALUES to handle multiple values). For the simple cases of lists, the keywords can be made invisible by using backquote (which is a visual aid to tell you something odd is going on, so you don't have to count parentheses after the word LET):

    (LET ((`(,A ,B ,@C) '(1 2 3 4))) (LIST A B C)) => (1 2 (3 4))

This also permits some syntactic variety:

    (LET (((VECTOR A B C) ...) ...) body)

and

    (LET ((`#(,A ,B ,C) ...) ...) body)

mean the same thing.
Finally, there is no reason why macros cannot be allowed in the position of a variable in a LET for this purpose! As long as they expand into a valid recognizable case, it can still work (just as for SETF). So if I define

    (DEFMACRO FROBBOZ (A B C D) `(LIST ,A (VECTOR ,B ,C ,D)))

then

    (LET (((FROBBOZ W X Y Z) ...)) body)

makes perfect sense. (This last example leads to the possibility that a macro might want to know which of *four* contexts it is expanding for: EVAL, COND, PROGN, or SETF. Oh, well.) Anyway, *that* is my current and consistent opinion. Revelation 22:18-19 (but I don't mean to be pompous).

Date: 3 February 1981 14:09-EST
From: George J. Carrette
To: dlw at MIT-AI
cc: LISP-FORUM at MIT-AI
Re: destructuring

    Date: 3 February 1981 03:21-EST
    From: Daniel L. Weinreb
    The Lisp Machine does not implement PROG and DO by expanding them into
    lambdas, and it would be unreasonably inefficient for it to do so. So
    several things have to be changed. This is only a minor subpoint anyway.

That is a nice & fine blanket statement there, widely used to justify just about anything. Exactly what would be inefficient? Code produced by the compiler? The compiler itself? The interpreter? System programmer effort? Maybe you can shed some light on this, and tell why the obvious way is not good in this case.

Date: 3 FEB 1981 1756-EST
From: RMS at MIT-AI (Richard M. Stallman)
Re: &keywords

I agree that &keywords are ugly, and I wish someone could think of a clean replacement for them. Then I would certainly be in favor of replacing them. I have tried my best to think of a clean replacement and I can't really come up with one. I can come up with other ways of doing the job but they all have some other ugliness about them. GLS has also tried to find an alternative and has not come up with a truly satisfactory one. It's better to have the ugly &keywords than nothing at all.

Date: 3 February 1981 18:25-EST
From: George J. Carrette
To: RMS at MIT-AI
cc: LISP-FORUM at MIT-AI
Re: &keywords

I thought your destructuring suggestion was a nice replacement for &keywords. Much simpler, and more general. I'll illustrate:

    (DEFUN FOO ((SPECIAL X) Y Z (FLONUM Y) (VECTOR 3 K) (VECTOR REST M)) ...)

Now, DEFUN FOO says "have a function FOO which takes its arguments off the stack." The argument list tells how to "take them off".

    (SPECIAL X)     ; bind the fluid variable X with the first argument.
    Y               ; lexical variables. might as well stay on stack, but this
    Z               ; does let the compiler choose register homes at will.
    (FLONUM Y)      ; give this guy a flonum home, making sure it is a flonum!
                    ; or else it wouldn't fit in the home.
    (VECTOR 3 K)    ; take the next 3 arguments and have a lexical variable K
                    ; point to them as a vector.
    (VECTOR REST M) ; M gets a vector of the REST of the arguments.
    (OPTIONAL Q)    ; is not quite as pretty to implement, think about this one.

Now, I see what I have given as happening at a pretty low-level. Things like SPECIAL declarations are handled as part of the USER-MACROLOGY, which is not at this level. Also, the form (QUOTE X) is not admitted in DEFUN. That kind of thing is handled in a formalism developed for semantic description of the system and extensions to the non-user level of the compiler. e.g.

    (DEFEXT PUSH (X (QUOTE Y)) (SET Y (CONS X (EVAL Y))))

So, the question is, does this all make practical sense? -gjc

p.s. I've got a version of RMS's destructuring LET if anyone would like to see it.

Date: 3 February 1981 18:37-EST
From: Kent M. Pitman
To: RMS at MIT-MC
cc: GJC at MIT-MC, LISP-FORUM at MIT-MC

I would like to hear strong arguments opposing the following syntax (other than how ingrained it is into code -- we'll assume we are designing some new dialect that doesn't care about sharing code so that we don't quibble about that sort of thing): A bvl is either a symbol or a list.
If it is a list, then the car of the list is a declaration, the cadr is the object of the declaration (which must be a symbol or another declaration), and the remainder are various attributes particular to the declaration. Eg, (OPTIONAL var default varp) is a possible declaration. Likewise, (REST var) or (FIXNUM var). If you insisted on &QUOTE-type functionality, you could even make it (QUOTE var) so that 'var would work. As a result, you would write

    (DEFUN F (X Y &OPTIONAL Z (W W-DEF) (WW WW-DEF WW?) &REST GUNK &AUX (A 3)) ...)

as

    (DEFUN F (X Y (OPTIONAL Z) (OPTIONAL W W-DEF) (OPTIONAL WW WW-DEF WW?))
      (LET ((A 3)) ...))

Note that the following:

    (DEFUN PROBABLY-SEVEN (&OPTIONAL (X 3) (Y 4)) (DECLARE (FIXNUM X Y)) (+ X Y))

would become

    (DEFUN PROBABLY-SEVEN ((OPTIONAL (FIXNUM X) 3) (OPTIONAL (FIXNUM Y) 4)) (+ X Y))

or

    (DEFUN PROBABLY-SEVEN ((FIXNUM (OPTIONAL X 3)) (FIXNUM (OPTIONAL Y 4))) (+ X Y))

This has a feature of being pre-parsed, and parsed in a structure which is useful to the various sorts of programs that need to interpret this stuff. It is somewhat larger in code size, but then so is (+ X Y) bigger than X+Y and we have gotten away from that for good reason. Comments appreciated. -kmp

Date: 3 FEB 1981 2214-EST
From: Henry at MIT-AI (Henry Lieberman)
To: LISP-FORUM at MIT-AI, KMP at MIT-MC
Re: Vote on destructuring

I favor the proposal of using lists in lambda lists to indicate special argument reception, over the present &keywords.

Date: 3 February 1981 23:38-EST
From: George J. Carrette
Re: guess what.

Setting up "&" to have a readmacro function can give

    ( &REST BAR) => (REST BAR)
    (&OPTIONAL A B C D E &REST K) => ((OPTIONAL A B C D E) (REST K))

or whatever else you want. I've tried it in maclisp, and it's quite easy for it to TYIPEEK for the next "&" or ")" so that it can know what to do. If one is willing to have "read-tokens" in the reader, then the entire character "&" need not be committed.
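GJC's experiment can be approximated in modern Common Lisp; the sketch below is a reconstruction, not his Maclisp original (which used TYIPEEK rather than PEEK-CHAR). The "&" handler reads the keyword name, then collects the following forms up to the next "&" or ")" into one parenthesized group. It also makes the obvious cost visible: installing "&" as a terminating macro character changes the reading of any symbol that contains "&".

```lisp
;; Sketch: "&" as a reader macro that groups &KEYWORD runs.
;; Reading (x y &optional a b &rest k) then yields
;; (X Y (OPTIONAL A B) (REST K)).
(defun ampersand-reader (stream char)
  (declare (ignore char))
  (let ((keyword (read stream t nil t))   ; the symbol after "&"
        (items '()))
    (loop
      ;; Skip whitespace and look ahead without consuming.
      (let ((c (peek-char t stream nil nil t)))
        (when (or (null c) (char= c #\)) (char= c #\&))
          (return))
        (push (read stream t nil t) items)))
    (cons keyword (nreverse items))))

;; Installing this makes "&" terminating, so FOO&BAR no longer
;; reads as one symbol -- exactly the objection raised in the thread.
(set-macro-character #\& #'ampersand-reader)
```

Installing this in a private readtable (via COPY-READTABLE and *READTABLE*) would confine the damage to code that opts in.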
Either way you look at it, the reader is a much more natural place to put such highly syntactic frobulations as &keywords. -gjc

Date: 3 February 1981 23:42-EST
From: Robert W. Kerns
Re: & as reader syntax

Talk about the cure being worse than the disease!

Date: 3 February 1981 23:25-EST
From: Edward Barton
Re: &keywords

Just to make sure I'm clear on the discussion: Is it being proposed that &keywords should be eliminated and will no longer work, if someone comes up with a "clean" replacement for them? Or is it being proposed that some new construct be introduced and that the &keywords stay around to avoid breaking the code that uses them? My fault for inattention if this has already been clarified.

Date: 4 February 1981 0009-est
From: Barry Margolin
To: GJC at MIT-MC
cc: LISP-FORUM at MIT-MC
Re: guess what.

    Setting up "&" to have a readmacro function can give
        ( &REST BAR) => (REST BAR)
        (&OPTIONAL A B C D E &REST K) => ((OPTIONAL A B C D E) (REST K))
    or whatever else you want.

This is not so good, because &keywords are only defined to have these meanings in bvl's, but using them as readmacros causes them to be expanded all the time. True, how often do we use &, so it could be slashed, but I'm still against having a readmacro that is only useful in a special place. barmar

Date: 4 February 1981 0016-est
From: Barry Margolin
To: KMP at MIT-MC
cc: LISP-FORUM at MIT-MC
Re: Your mail of 3 February 1981 18:37-EST

    As a result, you would write
        (DEFUN F (X Y &OPTIONAL Z (W W-DEF) (WW WW-DEF WW?) &REST GUNK &AUX (A 3)) ...)
    as
        (DEFUN F (X Y (OPTIONAL Z) (OPTIONAL W W-DEF) (OPTIONAL WW WW-DEF WW?))
          (LET ((A 3)) ...))

Question: why get rid of &AUX? I think it is useful to have all the variables that are being used in a function in one place, the lambda list. Then, when I am reading a function I know exactly where to look to find out these things. I don't know if an &SPECIAL already exists, but I would vote for it or (SPECIAL FOO) in this scheme. (shades of PL/I!!)
barmar

Date: 4 FEB 1981 0646-EST
From: RMS at MIT-AI (Richard M. Stallman)

I like KMP's argument syntax suggestion. It has one disadvantage: OPTIONAL has to be repeated with each optional variable. That's cleaner to parse, but painful. I would rather allow just (OPTIONAL) with no variable name to mean "following variables are optional by default". Getting rid of AUX variables is ok with me as they can be turned into a LET* with a trivial change. It should be possible to have a TECO macro to do that part just like the rest. For a change-over period, we could have the &-keywords recognized with a warning "Do M-X RunKWDCNVConvert Keywords in EMACS to convert DEFUN syntax". After a couple of months, this could also mail a message to BUG-LISPM saying that the file needs to be fixed by someone. This would enable the fraction of files with no maintainer any more to get fixed up. Even if we never flush the current syntax, it would still be good to switch to a clean one.

Date: 4 FEB 1981 0949-EST
From: WJL at MIT-ML (Bill Long)
To: LISP-FORUM at MIT-ML
Re: Definition Syntax

If anyone is interested in the results of a couple years of experience with a syntax like the one that has been proposed over the last few days, they might take a look at the LSB (Layered System Building) manual. The basic syntax for defining routines is:

    (define-public-routine (foo a (fixnum b) (unquoted c)
                                (optional d 'd d-supplied)
                                (one-or-more-of e))
      (aux-bindings (temp1 99) temp2)
      (declarations ...))

In place of public, there is private and intrasystem defining the extent to which the routine should be available. In place of routine there are macros, open-codable-routines, and compile-time-macros. There are also other keywords and reasonable defaults and there are similar facilities for defining special variables.
-Bill Long

Date: 4 February 1981 10:10-EST
From: Jon L White
To: KMP at MIT-MC
cc: RMS at MIT-MC, GJC at MIT-MC, LISP-FORUM at MIT-MC

Re your query:

    Date: 3 February 1981 18:37-EST
    From: Kent M. Pitman
    I would like to hear strong arguments opposing the following syntax . . .
    A bvl is either a symbol or a list. If it is a list, then the car of the
    list is a declaration, the cadr is the object of the declaration (which
    must be a symbol or another declaration), and the remainder are various
    attribute . . .

Say, wasn't it you who objected to BNF presentation of syntax? Aren't you now using an informal, and hence less precise, version thereof? But seriously, there are two important considerations, the first being succinctness of program presentation, and I'd like to offer a critical comment as well as a constructive suggestion related to "succinctness". To begin with, I can't agree that the huge difference in "amount of paper" needed to present the argument list in

    (DEFUN PROBABLY-SEVEN (&OPTIONAL (X 3) (Y 4)) (DECLARE (FIXNUM X Y)) (+ X Y))

and that needed in

    (DEFUN PROBABLY-SEVEN ((FIXNUM (OPTIONAL X 3)) (FIXNUM (OPTIONAL Y 4))) (+ X Y))

is on a par with the loss of "X+Y" to "(+ X Y)". Neither is the notion of being "pre-parsed" important -- there should clearly already be some function which takes a DEFUN arglist (and another function taking a DEFMACRO arglist) and standardizes it into a LAMBDA or LET format. Another example of succinctness is in the "destructuring" notion which has been bandied about recently: notice the difference in coding between the MacLISP/NIL format

    (DEFUN FOO ((A B C) Y) ...)

and without it

    (DEFUN FOO (X Y)
      (LET (A B C)
        (SETQ A (CAR X)) (SETQ B (CADR X)) (SETQ C (CADDR X))
        ...))

*** suggestion ***, wouldn't it be desirable to take another # character and use it for generating the kind of argument declaration wanted?
That way, people who like the "stickiness" of the existing &keyword style should be kept happy, and a compatible extension of the data-directed destructuring could be done without disrupting anyone's existing code. For example, let us suppose that the data-directed destructuring format as shown above "loses favor" with users, and they really need some program-directed format, e.g.

    (DEFUN (`(,A ,B ,C) Y) ...)

Of course it's pointless to introduce, incompatibly, such a program-directed format if its only use is to re-do the data-directed format with more input characters required. But for sake of discussion, let us suppose that some extension is to be done. Then, maybe we could let #@ generate the declaration for program-directed destructuring, and #&x, #&f, #&o, #&s, and #&r as a prefix for other "declarations", respectively "fixnum", "flonum", "optional", "special", and "rest". E.g.

    (DEFUN FOO (#@`(,A ,B ,C) #&x Y #&o #&f Z ((Q . R) (INIT))) ...)

would read into your proposed format like

    (DEFUN FOO ( (destructure (list (quote A) (quote B) (quote C)))
                 (fixnum Y)
                 (optional (flonum Z))
                 (optional (destructure `(,Q . ,R)) (INIT)) )
      ...)

[Incidentally, notice my convention of using case information to help distinguish true variables being bound from the declarational "words"] Now, in order to dissuade the usual inane replies about how awful this # syntax appears, let me say as did GLS that pointing out possibilities doesn't mean hot advocacy. The point is that the succinctness and "stickiness" of the old style can be preserved by read-macro-ifying into your list style; I don't like typing 4 lines and 142. characters when 1 line and 48. characters is just as clear. Automatic program analyzers, like GRINDEF, could display either format, just as it now has the option of printing either `(,A ,B ,C) or (LIST A B C); thus I as a user would like to be able to write my code either in terse or in verbose format. Now to my second consideration.
The proposal for program-directed destructuring as opposed to data-directed, was made partly on the supposition that someone, someday, might want to LAMBDA-bind some random car of some list cell (as opposed to binding the variable found in the data pattern). This would be a true extension, not now primitively obtainable. Does your proposed syntax buy us anything new? If not, why try to force everyone to conform to it -- why not just provide a macro facility which expands your format into existing primitives?

Date: 4 February 1981 12:23-EST
From: George J. Carrette
To: RMS at MIT-AI
cc: LISP-FORUM at MIT-AI

Having (OPTIONAL) with no variable name mean "following variables are optional by default" has the same problem as &keywords. That is, it forces a kind of parsing on any function-defining special form. More parenthetical would be ((OPTIONAL X) (OPTIONAL Y)) <=> (OPTIONALS X Y). You can even have AUX variables with no problem, (AUX FOO) or (AUXS FOO BAR BAZ). Also, this gives a way of saying

    (DEFUN FOO (X Y (AUX Z (+ X Y)) P Q (OPTIONAL R (* Z 4))) ...)

Think about how often the syntax of a language limits the possible things one can say in it. It may make the things one says most often shorter to say, but it is still nice to have a uniform, very lispy semantic level. Here is a paraphrased quote from the CGOL manual (of all things): "one of the best things about CGOL is that it allows you to escape into lisp syntax easily, with the "!" operator, for those cases where you can't easily figure out how to say what you want to say in the CGOL syntax" On that note, what is the feeling on having "&" be a readmacro? -gjc

Date: 4 Feb 1981 1310-EST
From: Dave Andre
To: GJC at MIT-MC
cc: DLA at MIT-EECS, lisp-forum at MIT-MC
Re: & as a reader macro.

With all the language purity and philosophy arguments I hear from GJC, I really wonder sometimes.

Date: 4 FEB 1981 1401-EST
From: ACW at MIT-AI (Allan C. Wechsler)
Re: New argument-list syntax proposal.

OK.
Look at this, folks.

    (DEFUN FOO ((OPTIONAL X) Y) ... )

So the first argument is optional, right? And the second argument is mandatory, right? Are your brains all broken? WHAT DOES THIS FUNCTION DO WHEN IT GETS ONE ARGUMENT? Is it an error because the second (mandatory) argument is missing? What weird heuristics will you use to disambiguate an arglist like:

    (X Y (OPTIONAL Z W) Q R)

You may say, "Well, we'll make it illegal to precede mandatory arguments by optional arguments." But if we do that sane thing, then we are burdened by zillions of (OPTIONAL ...)s at the end of the arglist. So we factor out the OPTIONALs ... and end up with a notation a lot like the present one. ---Wechsler

Date: 4 February 1981 16:05-EST
From: Howard I. Cannon

I am STRONGLY against changing the DEFUN arglist syntax.

Date: 4 February 1981 17:10-EST
From: Kent M. Pitman
Re: (OPTIONAL ...) &RELATED-ISSUES

Gads, I have a pile of mail from people responding to KMP's proposed change of &keywords. I am glad for the support and I would love to not have to use &optional and &rest, but I feel I should note that my original note didn't propose changing things -- I was discussing an imaginary implementation where no existing code cared. Anyway, I have a few comments on mail that went by today...

* It's hard to tell what they are. There's no actual rule forbidding &'s in symbols. Imagine how #'(lambda (&bigfloat x) (list &bigfloat x)) behaves functionally on the LispM now, and how it might behave someday if bigfloats were ever introduced. That's the bad thing about parsers.

* It's hard to tell what the scope is. While visually, (a &optional x y &rest z) might be easy to scan, imagine how a poor program feels about it. It needs a parser. Think about this. In your mind, you had grouping around the x and the y, then you threw it away when you wrote it down because you figured the program should be able to infer easily what you meant.
Heck, the reason we use (+ X (* Y Z)) instead of X+Y*Z is that it's what we mean and we know it's what we mean and there is no sense in throwing away info that you have. DLA, GJC's suggestion of an &-readmacro comes close, but it still means you are throwing away info you had and then asking the readmacro to reconstruct it. Note this is info you *really* were conscious of at the time you wrote it down. Please don't hit me with ``but if you don't want to trust the machine, why not specify bit patterns?'' because that is something that I am *not* conscious of -- that is something it would take me work to figure out. Besides, I think GJC's &-readmacro thing would parse (&quote x &eval &rest y) as ((quote x) (eval) (rest y)) which serves to illustrate my point. A few parentheses go a long way.

RMS, I agree 99.2% with GJC's comment that (OPTIONAL) is as bad as &OPTIONAL. Suppose I tell you there is a new keyword &FOO. Does my use of (&foo x y) tell you whether I mean ((foo x) (foo y)) or ((foo x) y)? I don't think it does. Suppose I were doing:

    (defun foo (flag &foo x y)
      (if flag ...code involving x only... ...code involving y only...))

then if you knew flag was always nil, you might be able to look at the code and not worry about what the foo declaration was at all, if you knew that &foo wasn't a sticky flag. If I said

    (defun foo (flag (foo x) y)
      (if flag ...code involving x only... ...code involving y only...))

then you don't have to know what the foo declaration does at all, as long as you know flag is nil. This, by the way, is related to why CASEQ is such a winner and computed GO's are such a loss. Other than grouping, there really isn't a lot of difference between them -- but boy does that sort of constraint buy you a lot!

GJC, Your suggestion of (OPTIONALS ...) and (REST ...) is better visually, but has the disadvantage that it is not so nicely recursive in nature.
It looks to me like my definitional syntax:

    (defun f (x y (optional (fixnum x) x-default x-p)
                  (optional w)
                  (optional (fixnum ww) w-default)
                  (rest (special r)))
      ...)

would look like:

    (defun f (x y (optionals ((fixnum x) x-default x-p) w ((fixnum ww) w-default))
                  (rest (special r)))
      ...)

which is bad because in mine the body of the OPTIONAL and REST clauses take the same sorts of fillers, but in yours, the bodies have different types of things in them. This seems to me to be a rather severe weakness. Additionally, yours is hard to do certain code-transformations on using a single level of MAPC/MAPCAR/MAP's. It requires mapping down the subcomponents in a way I think is much hairier. It might well be reasonable to make certain common declarations shorter. eg,

    (defun f (x y (opt x xd xp) (opt y) (rest z)) ...)

instead of

    (defun f (x y (optional x xd xp) (optional y) (rest z)) ...)

might go a long way. Also, a readmacro char that really did specify what you wanted ...

    (defun f (x y #:2O (x xd xp) y #:R z) ...)

That might take some getting used to. Note that it allows for LSB's one-or-more-of hack by things like:

    #:1,O  ; means 1 required, up to infinity allowed
    #:2,5O ; means 2 required, up to 5 allowed

These are just ideas. I agree that the problem of repeating the key is an annoying one, but I feel that it's not the most severe problem being addressed. This loss of ability to MAPCAR down them because they need to be parsed and indeed the need for a parser to begin with is a more severe issue in my mind. -kmp

Date: 4 February 1981 17:43-EST
From: Glenn S. Burke
To: KMP at MIT-MC
cc: LISP-FORUM at MIT-MC
Re: (OPTIONAL ...) &RELATED-ISSUES

As to "stickiness" of OPTIONAL, what LSB does is to not interpret what looks like a required arg any differently when it follows an optional arg, EXCEPT to say it is optional.
So the syntax doesn't really suffer, other than that if you want to specify a default value or supplied-p var you have to say OPTIONAL again:

    (define-public-routine (foo a (optional b) c (any-number-of frobs)) ...)
    ==> (define-public-routine (foo a (optional b) (optional c) (any-number-of frobs)) ...)

One can also meaningfully have different type wrappers both "inside" and "outside" what we call the call-mapping-keywords (eg OPTIONAL):

    (foo a b (vector (any-number-of (fixnum frobs) number-of-frobs)))

which says that the variable FROBS is of type VECTOR (and presumably should be implemented thusly), but the ARGUMENTS which map into the parameter FROBS are FIXNUMs. NUMBER-OF-FROBS gets bound to just that, and is automagically declared FIXNUM. In LSB it actually works to have this split-typing for OPTIONAL args too;

    (define-public-routine (foo a (notype (optional (fixnum b) 'ugh))) ...)

which says that the second argument to FOO, if given, MUST be a FIXNUM. But the variable B is NOTYPE, and might be bound to UGH. I don't think this last grotesquery has ever been used, but it sort of follows from the way these things are interpreted.

I am not advocating that any of this should be put into DEFUN. The LSB formalism considers DEFUN to be low-level and pretty implementation dependent (think of pdl-vectors, pdl-lists). What one does in LSB is to describe what a "call" to a routine should be like, and how that call maps into the bindings of the parameters. Given (define-public-routine (foo x (optional y)) ...), whether the mapping from (foo a) => (lambda (x y) ...) is made at compile or run time is left to LSB, since it can then make a reasonable default choice based on the particular lisp implementation. In the same way, one can suppress evaluation of selected arguments without defining a fexpr

    (define-public-routine (my-status (quoted kwd) (any-number-of other-frobs)) ...)

and a call to MY-STATUS will compile correctly (assuming of course no calls to EVAL).
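The arg-list "parser" that keeps coming up in this thread is genuinely small. Here is a sketch in modern Common Lisp of one plausible canonicalizer; the function name and the exact canonical form are assumptions, loosely following the parenthesized-declaration style under discussion, not any real primitive from these dialects:

```lisp
;; Canonicalize an &-keyword lambda-list into per-variable
;; parenthesized declarations, e.g.
;;   (X Y &OPTIONAL (Z 3) W &REST R)
;;   => (X Y (OPTIONAL Z 3) (OPTIONAL W) (REST R))
(defun parse-defun-arglist (arglist)
  (let ((mode nil)          ; current "sticky" keyword, or NIL for required
        (result '()))
    (dolist (item arglist (nreverse result))
      (cond ((member item '(&optional &rest &aux))
             ;; An &-keyword just switches the mode for what follows.
             (setq mode (intern (subseq (symbol-name item) 1))))
            ((null mode)
             (push item result))              ; required argument
            ((consp item)
             (push (cons mode item) result))  ; (OPTIONAL var default ...)
            (t
             (push (list mode item) result))))))
```

The sticky-mode variable is exactly the bit of hidden state the thread is arguing about: every consumer of the raw list has to reconstruct it, whereas the canonical output wears it on each element.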
Date: 4 February 1981 20:10-EST
From: Earl A. Killian
To: KMP at MIT-MC
cc: LISP-FORUM at MIT-MC
Re: parsing

I certainly don't prefer (+ X (* Y Z)) to X+Y*Z because it is easier to parse. I think that is a non-issue. The reason that the lisp notation is a win is in its uniform treatment of things like + and * as functions or macros, making it an easily extensible syntax.

Date: 4 FEB 1981 2314-EST
From: RMS at MIT-AI (Richard M. Stallman)

It is true that (OPTIONAL) requires a parser for the arg list, but this is not an important disadvantage. It is easy to write that parser (I know; I've written it a few times). That is not what I think is wrong with the existing & keyword syntax either. The problem with &keywords is that they are exceptions. If a symbol in a certain context is treated in a uniform manner (such as, by binding it, whatever it is) then it is unclean for any symbol in that context to mean anything else. It should be possible, when parsing code, to tell whether what appears in a certain context is supposed to be a keyword or not before you see what the symbol is. Either the context requires a keyword, in which case a symbol that isn't meaningful as a keyword of that type is an error, or the context wants a non-keyword, in which case ANY symbol is allowed and all symbols are treated the same way.

Date: 5 February 1981 13:09-EST
From: Kent M. Pitman
Re: More concrete remarks on &keywords...

GSB: Personally, I think your LSB syntax suffers from the fact that the keyword information is sticky. I still think that's a disadvantage. I would prefer to make required args after optionals illegal or simply fill in optionals from left to right. eg,

    (defun f (x (opt y) z (opt w)) ...)
    (f a b)     => x:a, z:b
    (f a b c)   => x:a, y:b, z:c
    (f a b c d) => x:a, y:b, z:c, w:d

    (defun f (x (rest y) z (opt w)) ...)
    (f a b)       => x:a, z:b
    (f a b c)     => x:a, z:b, w:c
    (f a b c d)   => x:a, y:(b), z:c, w:d
    (f a b c d e) => x:a, y:(b c), z:d, w:e

tho' I can perfectly well understand why people might not want to do this and I would have no interest in pushing it ... making these odd cases illegal wouldn't bother me.

RMS: Let's assume a parser were ok. Doesn't (OPTIONAL) suffer from the need for a way to specify defaults? Eg, (defun f (x y (optional) (z 3) (zz) (rest) q) ...) doesn't let you tell what a keyword is. Perhaps MDL's idea of strings would be a better one if you want to go this route. (Maclisp could be made to win even with "..." becoming a symbol in the current default environ since the symbol is tagged with a +internal-string-marker property...)

I do have application for and would really like to see (I am re-entering the real world here, no longer speculating) a canonical form which programs could easily map over, and some primitive available to turn things into that form (the same primitive and canonical form in each dialect would be helpful) so that I could do something like:

    (PARSE-DEFUN-ARGLIST '(FOO &OPTIONAL &QUOTE (X 3) &EVAL Z &REST BAR))
    => (FOO (OPTIONAL (QUOTE X) 3) (OPTIONAL Z) (REST BAR))

and

    (FORM-DEFUN-ARGLIST '(FOO (OPTIONAL (QUOTE X) 3) (OPTIONAL Z) (REST BAR)))
    => (FOO &OPTIONAL &QUOTE (X 3) &EVAL Z &REST BAR)

This would guarantee that people would never have to write their own parsers for these things and that in spite of whatever hacks had been done to make the bvl more visually aesthetic (eg, &keywords), there would be a standard interpretation defined by this set of conversion primitives. Hence, one could write

    (DEFMACRO DEFINE (NAME-STUFF ARG-LIST &REST BODY)
      `(DEFUN ,NAME-STUFF ,(FORM-DEFUN-ARGLIST ARG-LIST) ,@BODY))

    (DEFINE F (X (OPTIONAL Y)) ...)

and win just as well as those people who prefer the &-notation or whatever alternative notation FORM-DEFUN-ARGLIST produces. Further, although programs can't do (MAPCAR #'(LAMBDA (ELT) ...) ARG-LIST) when doing code-transformations, compilations, etc., they could do (MAPCAR #'(LAMBDA (ELT) ...) (PARSE-DEFUN-ARGLIST ...)) and still win -- and I don't consider this to be an excessive cost since only one parser function would have to be kept up to date -- the system version. -kmp

Date: 4 FEB 1981 1458-EST
From: ACW at MIT-AI (Allan C. Wechsler)
Re: Everybody and his brother is a language designer.

No language will ever be perfect. There have been several major improvements to MacLisp and its cousins in the last several years. Most of these improvements can be traced back to certain central insights about Lisp that started happening about ten years ago. Sometimes, very young languages go through such changes. Usually there is so little extant code in these languages, all of it of an experimental nature, that programmers are willing and eager to have the experience of rewriting their old code in the new way. When such changes happen to older languages with a significant body of nontrivial code already written, the result is usually a new dialect with no contract for upward compatibility. But the incorporation of advanced macro/structure features into Maclisp happened in the full light of day where everybody could see it. Further, by luck it happened that these changes were fully upward-compatible. This led the Maclisp user community to believe that making radical changes to an established language was the Thing To Do. Confusion about the relationships between Maclisp, Lisp Machine Lisp, and NIL has added to the tower of babble that has resulted. In particular, the Lisp Machine design group has never acknowledged any contracts of compatibility with Maclisp, beyond a commitment to maintaining some basic commonality of structure with a subset of Maclisp as it was a few years ago. Neither Maclisp nor Lisp Machine Lisp is in the frenetic early development stage which recent messages to this mailing list would characterize.
Both are established languages with large (and SEPARATE) bodies of stable, working, extant code. Major changes (such as changes in the nature of LAMBDA-expressions) should take place only after careful, sober, and SLOW consideration by people who intimately understand the structure of the language in question. We should expect (and hope) that such changes will take place rarely and gently. Both languages are already rich enough to write any code your little heart desires. There are NO proposed changes for which there is anything approaching urgent need.

You can indeed participate in the (slow, careful, but continuing) language design process. But a little more respect for A) the stability, power, and grace of the existing languages, and B) the ability and perceptiveness of the implementors and maintainers might be in order. As the flaming on this mailing list (yes, by sending this, I know I incriminate myself) has escalated to the point where simply reading it involves a major time commitment, these implementors and maintainers have started to lose interest in participating in the forum. Some consider such participation to be a waste of time. DLW could have repeated his arguments against "destructuring LAMBDA", but presumably decided that it wouldn't be worthwhile. Not everyone is a language designer. Most have simply not thought deeply about the issues pertaining thereto. But some people have. Try to remember that.

Date: 4 February 1981 18:11-EST
From: Kent M. Pitman
To: ACW at MIT-MC
cc: LISP-FORUM at MIT-MC
Re: Meta-Flame about flaming: Reply to ACW

The Lisp Forum, while time consuming to read, at least serves as a public record of thoughts on various subjects that can be reviewed at a later time to prevent various subjects from repeatedly wasting people's time in private conversations. As alluded to in my mail on case, this conversation has come up many times with many people individually :BUG'ing Maclisp.
It is my hope that in presenting such views publicly, I will have something to refer people to at a later time, rather than repeating myself. You may feel free to read this mail, but please do not complain about its volume. If you feel it is a waste of time, just remove yourself from the list. I feel that in spite of the time drain it costs, it has been of tremendous value to have an open channel of communication between the various Lisp design groups, and I know of others who feel similarly. Additionally, although there are no contracts for compatibility between the language designers, please do not forget that gratuitous incompatibility is something to be avoided, and this is another thing we seek to avoid. There are many in the AI lab who would like to move code from AI to the LispM, but can't because of incompatibilities. The LispM has been quite a barrier to Macsyma, my Fortran->Lisp translator (sigh), and other things we have attempted to bring up on the LispM for the benefit of LispM users. Recognition of where the real stumbling blocks exist, why they exist, and how they can be dealt with -- including the possibility of eliminating the ones which oughtn't be there -- is an important result of this Forum. Sometimes the point isn't even to change anything -- sometimes the point is just to make a mental note to ourselves that if we ever redo it from scratch, we should do certain things differently. Some decisions of that nature will hopefully be useful at some time years from now -- or even in the more near term to outsiders if we were to make these discussions available to prospective implementors who haven't had time to think in this level of detail about all the issues that have gone by. In particular, your closing statement: ``most have simply not thought deeply about the issues pertaining thereto'' is both haughty and false. Not everyone has thought enough about every issue, but it is not always the same people. 
Everyone has something to gain except those with closed minds -- and those people may feel free to remove their names from the delivery list. Certainly some small issues have received more mail than they deserve, while worse issues may receive less attention than they merit. This sort of thing is prone to happen in any open discussion. Nevertheless, nearly all the persons participating in these discussions have been involved in the mainstream of the design and implementation of compilers and/or interpreters for some large Lisp or Lisp-based system. I felt a strong implication in your note that lisp-design groups should be left to go out and design languages without the interference of the lowly masses, who should be content to go out and write code using whatever the designers see fit and who should stop pretending to know enough to claim that a design decision was bad. The line between the language designer and language user is fuzzier by the day, so I feel such an attitude (if that is indeed your feeling -- I suspect I have misinterpreted, though I can't infer what you might have been saying) is a relic of the past. -kmp  Date: 4 FEB 1981 2049-EST From: DLW at MIT-AI (Daniel L. Weinreb) I agree with Allan.  Date: 4 February 1981 21:15-EST From: George J. Carrette To: DLW at MIT-AI cc: LISP-FORUM at MIT-AI Date: 4 FEB 1981 2049-EST From: DLW at MIT-AI (Daniel L. Weinreb) I agree with Allan. Well, you people are really something. ALLAN made a statement before he sent that note along the lines of "it's better to insult a group of people than to single out individuals". I remember sending a note to BUG-LISPM a while back about how destructuring was missing from DEFUN, but was in DEFMACRO. [Timely note: Recall the recent flaming about how the Lispm people have been trying to keep DEFMACRO compatible with DEFUN] Mind you, I had no idea of the amount of AXE-GRINDING involved in that when I sent the note!
What reply did I get from DLW? Well, you started to explain to me about CAR and CDR and about what computers did. Typical. If you can't explain something, try and make the person who asked the question feel as small as possible, and maybe they will go away. Why don't you call a spade a spade here. The objection is that many people have been talking about things which they have no STAKE in. -gjc Date: 1 October 1980 20:25-EDT From: Jon L White To: Guy.Steele at CMU-10A cc: NIL at MIT-MC, LISP-FORUM at MIT-MC Re: DIGITP not yet resolved? Last month there was a lot of discussion about the name "DIGITP" -- has any proposal met with general approval yet? If not, how about DIGITP -- returns only "True" or "False" (however encoded) and is "True" only for the 10 decimal digits. DIGIT-WEIGHT -- returns "False" only for characters which are not used as digits in any of the bases 2 - 36.; otherwise extracts the "weight" and returns it as a fixnum. Should either (or both) signal an error when the argument is not implicitly a character? I think perhaps DIGIT-WEIGHT should; but DIGITP could, like FIXNUMP, be a total function.  Date: 1 October 1980 22:20-EDT From: Daniel L. Weinreb To: JONL at MIT-MC cc: NIL at MIT-MC, LISP-FORUM at MIT-MC Re: DIGITP not yet resolved? This sounds OK except that there is no way to get DIGITP to work in non-10 bases; certainly it should do what you said in the simple case. It should either take an extra IBASE argument or use IBASE as a global parameter, I guess.  Date: 2 October 1980 00:17-EDT From: KMP at MIT-MC, JAR at MIT-MC Sender: KMP at MIT-MC Re: DIGITP Jonathan and I have put together our thoughts on the issue. We begin by summarizing the issues (perhaps oversimplifying somewhat, for the sake of conciseness) of the DIGITP controversy: * JAR suggested in CHPROP that DIGITP might want an optional 2nd arg defaulting to decimal 10. * RWK complained that (DIGITP #/q) returns non-().
* JONL noted that DIGITP returns info about the weight of a digit and that this needs to be compared to IBASE to achieve a useful end. eg, (LET ((VAL (DIGITP x))) (AND VAL (< VAL radix) VAL)) JONL repeated these comments several times throughout, pointing to CHPROP and its descriptions of digit weights. * Moon suggested that perhaps DIGITP should look at IBASE to start with. * GLS brought up the idea of a 2nd arg to DIGITP again, citing the following points: 1. (DIGITP #/q) => () would be useful to novices 2. (DIGITP #/q IBASE) => 26. would accomplish JONL and Moon's reader needs more concisely than the code JONL suggested. 3. (DIGITP #/q x) => ... could be used for arbitrary radices independent of what the lisp reader, etc. would want to do with its digitness. * KMP agreed with 2nd arg idea, noting he had rarely wanted non-{0,1,...,9} DIGITPness anyway. * GSB agreed with 2nd arg idea, noting he had never wanted non-{0,1,...,9} DIGITPness anyway. * RMS said he didn't care what DIGITP did with a 2nd arg, but that with one arg it must return non-() only for 0-9. * DLW agreed with GSB. ------ Now it seems to us that the problem stems from the fact that there is this winning name called "DIGITP" and everyone wants to see to it that it has maximally useful semantics without screwing things up. Domains of use include `machine language' readers (Lisp-like readers, parsers, etc.), human language readers, and user queries like "Input a number:" -- to name just a few. Naive users, especially those experienced with languages that have no other base available than 10, are likely to be confused by the notion of non-{0-9} digits. Experienced users will want something that can handle different ranges of digits and [perhaps] return useful information about the digit as well.
It would seem to us that the following solution presents itself: (DIGITP char &optional radix) If no radix is supplied, DIGITP returns true (T or #T, as appropriate) for the characters 0 through 9, and () for other characters. (This would reduce confusion for novices who expected it to be a simple predicate. If a sophisticated user thinks this should do something else, maybe he wants to look at what (DIGITP char 10.) will do in this formalism.) If radix is supplied, DIGITP returns () if the character is not a digit in the input base specified by radix. If the character is a digit in that radix, then its weight as a digit is returned. (DIGIT-WEIGHT char) Returns the weight of its argument as a digit, independent of base. eg, (DIGITP #/A) should return 10. regardless of any radix considerations. (DIGITP #/a) should also return 10., since "a" would be the same as "A" in number reading. (DIGIT-NAME value) Returns the character which represents the digit whose value is given by its argument. eg, (DIGITP 10.) => #/A or ~A or whatever. -kmp  Date: 2 October 1980 13:12-EDT From: Daniel L. Weinreb Sender: dlw at CADR8 at MIT-AI Re: KMP and JAR proposal for DIGITP Sounds OK to me.  Date: 2 October 1980 14:41-EDT From: Howard I. Cannon To: KMP at MIT-MC, JAR at MIT-MC cc: LISP-FORUM at MIT-MC Re: DIGITP I also agree with this proposal.  Date: 2 October 1980 1407-EDT (Thursday) From: Guy.Steele at CMU-10A Re: DIGITP Regarding the KMP/JAR proposal: (1) I disagree that a separate function DIGIT-WEIGHT is needed; DIGITP can return the weight. I don't think confusion of novices is likely; what about MEMBER, for example? The only argument I can think of for a separate function DIGIT-WEIGHT is so that it can be declared a FIXNUM function in MacLISP; but its range is quite restricted, and the fixnum "interning" mechanism will guarantee that no consing is done anyway. (2) Even if there were a DIGIT-WEIGHT function, it also ought to take an optional RADIX argument. 
(I argue for (DIGITP #/V 'ROMAN) => 5.) (3) The function DIGIT-NAME exists in CHPROP under the name DIGIT-CHAR. It too should take an optional radix argument, returning () if the conversion is not possible, e.g. (DIGIT-CHAR 12. 8) => (). (4) Minor typos: the function descriptions all think that every function is called DIGITP.  Date: 2 October 1980 15:31-EDT From: Kent M. Pitman To: Guy.Steele at CMU-10A cc: LISP-FORUM at MIT-MC Re: DIGIT-WEIGHT Date: 2 October 1980 1407-EDT (Thursday) From: Guy.Steele at CMU-10A ... Regarding the KMP/JAR proposal: (1) I disagree that a separate function DIGIT-WEIGHT is needed; DIGITP can return the weight. I don't think confusion of novices is likely; what about MEMBER, for example? The T/() idea is free since there was already a way for people wanting a number to get it. MEMBER is a fine example, but there was no other existing way to get the thing MEMBER returns, whereas in our proposal there is. The only argument I can think of for a separate function DIGIT-WEIGHT is so that it can be declared a FIXNUM function in MacLISP; but its range is quite restricted, and the fixnum "interning" mechanism will guarantee that no consing is done anyway. Well, JAR said the same thing and I talked him out of it. The point is that DIGIT-WEIGHT (in its current one-arg form) is base-independent. ie, (DIGIT-WEIGHT #/A) => 10. regardless of base. So if you do (DIGITP #/A) you go through additional code which checks base. Indeed, you have to do (DEFUN DIGIT-WEIGHT (X) (DIGITP X 36.)) which is not only slower but amazingly ugly because it has this random constant 36. in there. (2) Even if there were a DIGIT-WEIGHT function, it also ought to take an optional RADIX argument. (I argue for (DIGITP #/V 'ROMAN) => 5.) We did have a hairier proposal which did this, but we decided it was too hairy. I hope you don't take offense, but I think ROMAN is (1) cute and (2) a crock. Make a PRINT-WITH-ROMAN and/or READ-WITH-ROMAN function if you want.
Include it with the core Lisp system if you really want (though I think it's unwarranted), but I don't think it deserves a place in the character proposal unless you make it more general (eg, maybe if ROMAN were bound to some syntax table and you could do (DIGITP #/V ROMAN) ...). I'm not even close to convinced though. (3) The function DIGIT-NAME exists in CHPROP under the name DIGIT-CHAR... (4) Minor typos: ... Oops. Ok.  Date: 2 October 1980 15:36-EDT From: Jon L White Re: DIGITP's second arg, and error conditions In the following note: Date: 2 October 1980 00:17-EDT From: KMP,JAR at MIT-MC . . . (DIGITP char &optional radix) If no radix is supplied, DIGITP returns true (T or #T, as appropriate) for the characters 0 through 9, and () for other characters. . . . If radix is supplied, DIGITP returns () if the character is not a digit in the input base specified by radix. If the character is a digit in that radix, then its weight as a digit is returned. both DIGIT-WEIGHT and DIGIT-NAME are accepted, so I might suggest that DIGITP act as per my note of 1 October (reproduced below). But I didn't see any consideration in this proposal of the question of when to signal a :WRONG-TYPE-ARGUMENT error; would you accept my proposal for that? Date: 1 October 1980 20:25-EDT From: Jon L White . . . Should either (or both) signal an error when the argument is not implicitly a character? I think perhaps DIGIT-WEIGHT should; but DIGITP could, like FIXNUMP, be a total function. Now, as for "DIGIT-NAME", note GLS's reply that CHPROP already suggested "DIGIT-CHAR"; furthermore, there is the question of how the result of DIGIT-CHAR should be presented, namely which of the various representations for characters should be used. There is the least confusion, I think, if its result is like the output of the LISPM function CHARACTER --- namely a fixnum. Now back to DIGITP's second arg: Date: 1 October 1980 20:25-EDT From: Jon L White . . .
DIGITP -- returns only "True" or "False" (however encoded) and is "True" only for the 10 decimal digits. So why even bother with a second arg to DIGITP if DIGIT-WEIGHT is added? In fact, the reason that the whole DIGITP question came up at all was that RWK boggled when he saw DIGITP returning a useful value (instead of "True" or "False"); at that time, I doubt that anyone other than myself had ever used that "useful" value, and suspect that all other users presumed the Webster's dictionary definition, i.e. one of the ten digits 0, 1, ... 9. But if DIGITP is to admit a second arg (and as KMP suggests it might make for more concise code if it did), then it would be more consistent still to have it return "True" or "False" rather than the DIGIT-WEIGHT. Similarly, I like KMP's suggestion that DIGIT-WEIGHT admit an optional second arg (defaulting to 36.) for radix; it still isn't a fixnum function (in the MacLISP sense) since it could return () on non-digital characters.  Date: 2 October 1980 1607-EDT (Thursday) From: Guy.Steele at CMU-10A Re: DIGITP and the hair in ROMAN Regarding KMP's response to my response to...: (a) The reasoning why DIGITP need not return the weight seems to be circular. Perhaps I am misunderstanding the intention, but DIGIT-WEIGHT seemed to be introduced solely so that DIGITP need not return a weight--and the existence of DIGIT-WEIGHT is then said to justify DIGITP's not having to return a weight! (I believe CHPROP originally called for DIGITP to return the weight. I'm willing to listen to other arguments for two separate functions, but I haven't seen a good one yet.) (b) I am not advocating that ROMAN, or any other weird base, be part of any character standard or any standard LISP. It is just that if the functions are carefully defined to take an optional radix argument, then it will allow for extensions in the future. In particular, I want to allow the possibility that the character codes for digits may vary with radix or even with font.
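For concreteness, the two-function scheme under debate might be sketched as follows, in latter-day Common Lisp (whose DIGIT-CHAR-P and DIGIT-CHAR are essentially the descendants of these proposals). The names DIGITP, DIGIT-WEIGHT, and DIGIT-NAME below are the proposal's, not functions any Lisp of the time actually provided:

```lisp
;; Sketch only: DIGITP, DIGIT-WEIGHT, and DIGIT-NAME are the proposed
;; names, not standard functions.  Common Lisp's DIGIT-CHAR-P and
;; DIGIT-CHAR do the underlying work here.

(defun digitp (char &optional radix)
  "With no RADIX, a pure predicate, true only for the ten decimal
digits.  With RADIX, the digit's weight, or NIL if CHAR is not a
digit in that radix."
  (if (null radix)
      (not (null (digit-char-p char 10)))
      (digit-char-p char radix)))

(defun digit-weight (char)
  "Weight of CHAR as a digit, independent of base; A has weight 10
in every radix that admits it."
  (digit-char-p char 36))

(defun digit-name (weight)
  "Character representing the digit whose value is WEIGHT."
  (digit-char weight 36))
```

Under this sketch (DIGITP #\q) is NIL, (DIGITP #\q 36) and (DIGIT-WEIGHT #\q) are both 26, and (DIGIT-NAME 10) is the character A, matching the behaviors being argued over in the thread.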
Date: 2 October 1980 1612-EDT (Thursday) From: Guy.Steele at CMU-10A Re: JONL's recent note on DIGITP If one accepts that (a) DIGITP should take a second arg, and (b) the hypothetical DIGIT-WEIGHT should take a second arg, then it follows that (c) (DIGITP ...) = (NOT (NOT (DIGIT-WEIGHT ...))), which seems redundant. So if there are strong arguments *against* letting either of those functions take an optional second arg, then they should be distinct functions with distinct names; but otherwise it seems wasteful to have more than one function.  Date: 2 October 1980 17:27-EDT From: Kent M. Pitman To: Guy.Steele at CMU-10A cc: LISP-FORUM at MIT-MC Re: Why DIGITP and DIGIT-WEIGHT? (APPEND A ()) ; Obscure use of APPEND to copy top of list (SUBST () () A) ; Obscure use of SUBST to copy structure (LSH -1 -1) ; Obscure use of -1 & LSH to get max pos fixnum (SSTATUS MACRO (PROGN I) ...) ; Obscure use of PROGN to make I EVAL ... These are ugly idioms. They each have their particular application -- some more questionable than others, but in each case, the end result is in no way related to the name. APPEND should put things together. SUBST should do substitution. -1 should be negative 1 -- we shouldn't have to use it for its representation. PROGN should sequence -- it shouldn't be used to make a list out of something that wouldn't normally be ... Having to say (DIGITP #/8) to get a digit's weight is annoying. It looks like a predicate. It may be a predicate -- sometimes. If I did: ... (WHILE (SETQ C (DIGITP (TYI))) ...) ... I might want DIGITP instead of DIGIT-WEIGHT but if I were in a routine which I knew had verified its input to be digits, I would want to say ... (LET ((W (DIGIT-WEIGHT DIGIT))) ... rather than ... (LET ((W (DIGITP DIGIT))) ... Indeed, I may have been passed the DIGIT from some other routine and not know what his input radix was. I may only know that the caller wanted it treated as a digit. Then the default argument must be explicitly given (as 36.
-- sigh) and I will need ... (LET ((W (DIGITP DIGIT 36.))) ... which is even dumber if all I want is the digit weight. I'd also like to think that the code I was writing would work fine if there were bases above 36. Why should that constant have to go in? Having two functions means I can say the functionality I want and get it straightforwardly. The issues are subtle when there is only one function and they are straightforward if there are two. I vote for the straightforward case. The code will be there anyway; why not give the user the entry point he wants? -kmp  Date: 2 October 1980 17:59-EDT From: George J. Carrette Re: DIGITP and DIGIT-WEIGHT Isn't the problem here that lisp as we think of it has the strange property of non-null truthity? We get efficiency in runtime and code size by having predicates return useful values. However, most of our control structures still don't support this fully (although many suggestions and private macros abound). I think the elegance of a return value of a predicate is wiped out if I then see people doing: (DEFUN FOO ( .... &AUX TEMP) (CASEQ (SETQ TEMP (FOOBARP X)) ) ) This is especially gross if because of compiler deficiency the programmer re-uses (i.e. SETQ's) the variable TEMP in either code sections 1 or 2. Having a uniform set of predicates doesn't solve the whole efficiency/modularity/elegance problem if it's not backed up by good control structures. Another perhaps parallel way of approaching this is to give predicates very special consideration in the compiling. When compiling "(IF (FOOBARP X) ...)" you might try "opening up" the routine FOOBARP if it is suggested by the properties of FOOBARP. This is not very novel and will only win (be practical) under certain conditions, however it does allow you to have your cake (general predicates) and eat it too (efficiency), in many cases. This DIGITP guy seems to cry out for a macro expansion generalization which I heard about first from ALAN.
If the macro DIGITP gets a second argument of VALUE, EFFECT, or PREDICATE, then it can do nice things: (IF (DIGITP X) ...) => (IF (MEMBER X '(#/0 #/1 ... #/9)) ...) which in turn opens up to a LOOP construct with two exit forms, one for the IF and one for the THEN conditions. "Compilation" of a predicate (either by machine or by hand) then becomes one of generating the usual code for returning NIL or a value, and also generating optimizers and predicates to look for possible optimizations. Automatically getting feedback from user code (which all passes through the compiler) is then probably more important than any amount of arguing trivialities like DIGITP/DIGIT-WEIGHT. -gjc p.s. I wish ALAN's second argument to macros were implemented as soon as possible by all reasonable lisps. It would give us more to play with and less to argue about. Think about (IF (DIGITP X) ) going to (DO ((J #/0 (1+ J))) ((> J #/9) ) (AND (= X J) (RETURN )))  Date: 2 October 1980 16:52-EDT From: Daniel L. Weinreb Sender: dlw at CADR8 at MIT-AI To: Guy.Steele at CMU-10A, lisp-forum at MIT-MC Re: DIGITP Making DIGIT-WEIGHT or DIGITP return the right fixnums for non-fixnum bases such as ROMAN seems silly; 95% of the code that uses this function (whichever it is) will be assuming that it can read a digit, multiply by the radix, add the next, etc.  Date: 2 October 1980 18:35-EDT From: Glenn S. Burke Re: Why DIGITP and DIGIT-WEIGHT? I would like to be able to say (DIGITP ch) and not lose efficiency over doing (LESSP #.(1- #/0) ch #.(1+ #/9)) (assuming "primitive" maclisp). For that reason, i think it is appropriate for DIGITP to barf if its arg is not a "character". For the same reason, it is reasonable for DIGIT-WEIGHT to admit a second argument. One could argue that the optional second arguments to the two functions should default similarly, but i don't think that is necessarily reasonable given the differing meanings.
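The generalization GJC attributes to ALAN above, a macro told how its value will be used, might be sketched like this. The CONTEXT parameter is purely hypothetical (no Lisp of the period passed expansion contexts to macros; here it is simulated as an explicit argument), and DIGITP-IN-CONTEXT is an invented name:

```lisp
;; Hypothetical sketch: imagine the expander passed every macro a
;; CONTEXT of VALUE, EFFECT, or PREDICATE, describing how the form's
;; value will be used.  Simulated here as an explicit second argument.

(defmacro digitp-in-context (x context)
  (case context
    (predicate  ; only truth matters: expand to a cheap range test
     `(char<= #\0 ,x #\9))
    (effect     ; value discarded entirely: expand to nothing
     nil)
    (otherwise  ; full value wanted: produce the digit's weight
     `(digit-char-p ,x))))

;; (digitp-in-context c predicate)  expands to  (char<= #\0 c #\9)
;; (digitp-in-context c value)      expands to  (digit-char-p c)
```

A predicate-context caller thus pays for nothing but the comparison, which is the open-coding win GJC is after.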
But the whole point, at least in my view, is that there should be some routines available so i don't have to know the magic constants in converting digits <-> integers (and alphabetic case etc.), assuming even that there are any. One could envision the sequence (char in TT) ANDI TT,137 CAILE TT,31 SUBI TT,47 SUBI TT,20 as an open-coding for (DIGIT-WEIGHT c), allowing a radix of 36. If it is a predicate also, you tend to lose the ability to do things like this.  Date: 2 October 1980 23:37-EDT From: Carl W. Hoffman Re: DIGITP Having DIGITP and DIGIT-WEIGHT as two separate functions is desirable even ignoring compilation issues. I would find code which used DIGITP for other than its boolean value confusing. If the digit weight is desired, the code should say so. Recycling a name doesn't save anything.  Date: 3 October 1980 00:54-EDT From: Daniel L. Weinreb Sender: dlw at CADR8 at MIT-AI Re: DIGITP I agree with CWH and KMP. Saving a name at the cost of simplicity and clarity is not appropriate for today's tradeoffs.  Date: 3 October 1980 1113-EDT (Friday) From: Guy.Steele at CMU-10A Re: Ooops, forgot to send this to LISP-FORUM - - - - Begin forwarded message - - - - Date: 3 October 1980 1112-EDT (Friday) From: Guy.Steele at CMU-10A To: KMP at MIT-MC (Kent M. Pitman) Subject: Re: Why DIGITP and DIGIT-WEIGHT? Well, first of all you make the implicit assumption that the weight of a digit is independent of the radix. That may be true in most practical applications, but I would like to avoid it if possible, for the sake of (perhaps imaginary) future flexibility. Second, the most common case is in fact to test for digitness and then, if it is a digit, do something with the weight. Both routines do about the same thing, after all--the motivation for having only one is similar (though on a smaller scale) to the desire for a DIVIDE function (say--why haven't we got one??) that returns both quotient and remainder, because doing it twice can be expensive on bignums.
I do agree with your arguments about DIGITP being used "out of the blue" just to get the weight without testing. I might argue that for robustness one ought to check the result to make sure it's not (), but then again, I suppose I would be happy to have only a single function, but have it called DIGIT-WEIGHT rather than DIGITP. If you don't like the looks of (COND ((DIGIT-WEIGHT ...) ...) ...), then how about CHAR-DIGIT, by symmetry with DIGIT-CHAR? - - - - End forwarded message  Date: 3 October 1980 1124-EDT (Friday) From: Guy.Steele at CMU-10A To: GJC at MIT-MC (George J. Carrette) cc: lisp-forum at MIT-MC Re: DIGITP and DIGIT-WEIGHT GJC's note about control constructs and macros really hit home. I have, for some time, advocated that SOME way be found to grab the value of a COND predicate neatly. I've experimented in SCHEME with things like (COND ((DIGITP X) => (LAMBDA (WEIGHT) (HACK WEIGHT 43))) ... and also thought of simply specifying that a symbol after a predicate and before another expression should get bound to the value: (COND ((DIGITP X) WEIGHT (HACK WEIGHT 43)) but neither of these is very LISPy. A similar idea would happen to work for CASEQ: (CASEQ (DIGITP X) WEIGHT ((0 1 2) (HACK WEIGHT)) ((3 4) (GROSS-OUT WEIGHT)) ... But I'm not satisfied with these, and I bet no one else is, either. The idea of an extra argument to macros is neat. It would require EVAL to keep track of these contexts, however--EVAL would probably need the context as an extra argument also. Maybe functions could get it, too! It's hard to know where to stop. But this strikes me as a quick explicit patch, where what one really wants is an optimizing compiler as GJC described that would take care of these things for you. (For one thing, the macro expansion shown by GJC for an IF and DO loop couldn't be done by a simple macro--you'd need continuations.)  Date: 3 October 1980 1152-EDT (Friday) From: Guy.Steele at CMU-10A Re: DIGITP Okay, okay, I give up!
I agree that having two functions DIGITP and DIGIT-WEIGHT is all right, and leave it to a very smart compiler to optimize the case of (COND ((DIGITP X) (HACK (DIGIT-WEIGHT X))) to avoid duplicate work if it thinks it is worthwhile, which it probably isn't anyway, I guess. I propose that DIGITP nevertheless return the weight of the digit if the argument is in fact a digit, and that DIGIT-WEIGHT blithely assume that it not only has a character in hand, but in fact a digit, so that in the case of the above COND a compiler which open-codes would know that (a) the call to DIGITP actually is "for predicate" and so need not generate the weight, and (b) the call to DIGIT-WEIGHT may be open-coded as GSB suggested.  Date: 7 October 1980 04:26-EDT From: Robert W. Kerns Re: DIGIT-WEIGHT/DIGITP and funny input bases Actually, I think this all is the wrong division of labor. I really think what we want is DIGITP and INPUT-FIXNUM. DIGITP simply tells you whether a character is a digit in the current or supplied base, and INPUT-FIXNUM takes a first argument of a SEQUENCE of CHARACTERS, and an optional second argument like DIGITP. I can't think of any reason why anybody would want to know the digit weight of a single digit (other than to implement INPUT-FIXNUM), but of course you can always ask INPUT-FIXNUM to do the conversion. On a more general level, INPUT-NUMBER would take a sequence of characters and an optional base. INPUT-NUMBER would return the number that the sequence of characters represents, or () if it is not a valid number representation. INPUT-NUMBER would handle any valid LISP representation for FLONUMs, FIXNUMs, including such funny things as 1_5 and such if they're still in the language, ROMAN if your LISP has ROMAN, CHINESE if you've got that. A parser would collect digits and pass them off to INPUT-FIXNUM when it got them all. I don't see any reason right off why anybody would really want to deal with the individual weights of individual characters.
Does anybody have any examples where you would?  Date: 7 October 1980 04:53-EDT From: George J. Carrette To: RWK at MIT-MC cc: LISP-FORUM at MIT-MC Re: DIGIT-WEIGHT/DIGITP and funny input bases The wrong division of labor? Who would want to write a parser anyway? Who is going to implement a reader? I don't think it's honest to encourage users to implement "un-lispy" syntax via a set of lisp primitives. Think of the people who lost because they hacked the maclisp readtable, and now can't run on the lisp machine. (e.g. CGOL). Why should we pretend to provide general facilities for that sort of thing when we know that it is highly non-optimal, and draws people away from powerful extensions to the language? (extensions to the PRINTER and GRINDER and DESCRIBER through various structure mechanisms, and to the EVALUATOR through functions and macros). Now, if you give 'em something like the finite-state-machine compiler and interpreter as a user-package, then I think they will be winning so much they won't even remember what life back on the farm was like. It's a sufficiently powerful thing, non-trivially unlike anything else in the system. -gjc  Date: 7 October 1980 05:26-EDT From: Robert W. Kerns To: GJC at MIT-MC cc: LISP-FORUM at MIT-MC Re: DIGIT-WEIGHT/DIGITP and funny input bases Hey, I wasn't saying that we want to encourage people to hack funny input bases and weird Pratt-like syntaxes. Rather, considering funny input bases led me to consider the choice of functions, and how they are used. If you're not doing some parser-like activity, why are you checking if something is a digit? It may be a totally trivial parse, like gobbling line numbers off the beginning of a line of some data file, or parsing the output of some MACSYMA program. All the examples I've seen people come up with for use of DIGITP have been imbedded in the context of 'OK, now I've got this digit, now I incrementally interpret it as a number'.  Date: 7 October 1980 14:04-EDT From: Kent M.
Pitman To: RWK at MIT-MC cc: LISP-FORUM at MIT-MC Re: DIGIT-WEIGHT/DIGITP and funny input bases Since DIGIT-WEIGHT is trivially definable as a variant on INPUT-FIXNUM -- eg, (DEFUN DIGIT-WEIGHT (X &OPTIONAL BASE) (DECLARE (CHARACTER X)) (INPUT-FIXNUM (COERCE-CHARACTER-TO-STRING X) BASE)) or something like that -- this might make sense. You're certainly not losing anything functional. I'd suggest a bit of caution here, though. The trick to designing a language is to make sure that you have all the things defined primitively that you can't do without AND to make sure that you provide enough useful/common (albeit not dictated by necessity) constructs that people don't end up having to program in assembly language... I think you should distinguish -- at some point at least -- between things you want in the language and things you want in the programming environment. After all -- to parallel your argument and use some of the motivations for the other Lisp-based AI languages -- no one really wants to say CAR and CDR anyway. They want to solve problems with goals, etc. But that's no justification for not providing CAR and CDR. If you are going to provide some high-level handler that's ok, but don't shortchange the people asking for primitives. INPUT-FIXNUM is not a primitive notion in any way I can understand. Think of DIGIT-WEIGHT/INPUT-FIXNUM as paired like DEFAULTF/LOAD in Maclisp. If I didn't like LOAD, I could write my own -- and it would be able to use the normal Lisp file defaults. If I hid the Lisp defaults away in a place they couldn't be seen, I'd end up writing something which was not only functionally different but which might default wrong. I'd have to guess its file defaulting algorithm. Similarly, if I don't like INPUT-FIXNUM and want to write my own, I might still want to use the same function that it calls to get digit weights. That's a fully conceptually separable piece of code which you're already having to write.
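The layering KMP describes can be sketched from the other direction as well: an INPUT-FIXNUM built on top of the digit-weight primitive it would otherwise hide. This is a latter-day Common Lisp sketch, with DIGIT-CHAR-P standing in for the proposed DIGIT-WEIGHT; the INPUT-FIXNUM name and interface are RWK's proposal, not a standard function:

```lisp
;; Sketch: INPUT-FIXNUM written on top of the digit-weight primitive,
;; illustrating KMP's point that the weight function is a separable
;; piece of code worth exposing.  DIGIT-CHAR-P stands in for the
;; proposed DIGIT-WEIGHT.

(defun input-fixnum (chars &optional (radix 10))
  "Return the fixnum denoted by the sequence CHARS in RADIX,
or NIL if any character is not a digit in that radix."
  (let ((result 0))
    (map nil #'(lambda (c)
                 (let ((w (digit-char-p c radix)))
                   (if w
                       (setq result (+ (* result radix) w))
                       (return-from input-fixnum nil))))
         chars)
    result))
```

Here (INPUT-FIXNUM "777" 8) => 511 while (INPUT-FIXNUM "12Q") => NIL; a caller who dislikes this interface can still reach the weight function underneath, which is exactly the entry point KMP is arguing for.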
Why not give the user an entry into it rather than forcing him to write his own ... I don't have a strong stand about this -- I'm just pointing out that if you want to open this can of worms (INPUT-FIXNUM), you're not considering all the aspects of the issue ... I'd like to hear more discussion from some of the others ... -kmp  Date: 7 October 1980 16:08-EDT From: Glenn S. Burke Re: DIGIT-WEIGHT/DIGITP and funny input bases Date: 7 October 1980 04:26-EDT From: Robert W. Kerns A parser would collect digits and pass them off to INPUT-FIXNUM when it got them all. I don't see any reason right off why anybody would really want to deal with the individual weights of individual characters. Does anybody have any examples where you would? Ok, the next version of FORMAT will autoload optimal strings so it can "parse" the parameters.  Date: 7 October 1980 18:17-EDT From: Robert W. Kerns To: GSB at MIT-ML cc: LISP-FORUM at MIT-MC Re: DIGIT-WEIGHT/DIGITP and funny input bases Date: 7 October 1980 16:08-EDT From: Glenn S. Burke Date: 7 October 1980 04:26-EDT From: Robert W. Kerns A parser would collect digits and pass them off to INPUT-FIXNUM when it got them all. I don't see any reason right off why anybody would really want to deal with the individual weights of individual characters. Does anybody have any examples where you would? Ok, the next version of FORMAT will autoload optimal strings so it can "parse" the parameters. How is this an example? Those digits represent numbers; I claim you don't care at all about the weights of the individual characters, you care about the number that the sequence of digits represents!  Date: 7 October 1980 18:28-EDT From: Robert W. Kerns To: KMP at MIT-MC cc: LISP-FORUM at MIT-MC Re: DIGIT-WEIGHT/DIGITP and funny input bases I'm not opposed to the existence of DIGIT-WEIGHT. But the situation is not really analogous to CAR/CDR. CAR/CDR are used in a much less constrained manner than DIGIT-WEIGHT would be.
I have yet to see any examples where it would be used at ALL other than to implement or duplicate INPUT-FIXNUM.  Date: 7 October 1980 18:56-EDT From: Glenn S. Burke To: RWK at MIT-MC cc: LISP-FORUM at MIT-MC Re: DIGIT-WEIGHT/DIGITP and funny input bases Date: 7 October 1980 18:17-EDT From: Robert W. Kerns Date: 7 October 1980 16:08-EDT From: Glenn S. Burke Date: 7 October 1980 04:26-EDT From: Robert W. Kerns Does anybody have any examples where you would? Ok, the next version of FORMAT will autoload optimal strings so it can "parse" the parameters. How is this an example? Those digits represent numbers; I claim you don't care at all about the weights of the individual characters, you care about the number that the sequence of digits represents! That is true. But what kind of interface is provided? I am at some point in a string, and the next character(s) may be digits, and if so I want the value. Will you provide a routine which will then take a string, the input radix, and a starting index, and return TWO values (the number or NIL if none, and the new position in the string)? And a mess of other flags to tell it that I don't want shifted, exponential, or flonum formats recognized? If I had to find the end of the "string" of digits, then I have to iterate down the string, and I might as well just collect the number at the same time. I'm not saying that INPUT-FIXNUM is not called for, but that in places where one is not augmenting or supporting the full glory of Lisp's number syntax DIGIT-WEIGHT may be called for, and INPUT-FIXNUM (or hairily called variants) would be cumbersome to use and overkill.  Date: 8 October 1980 02:12-EDT From: Barry Margolin To: KMP at MIT-MC, RWK at MIT-MC cc: LISP-FORUM at MIT-MC Re: DIGIT-WEIGHT/DIGITP and funny input bases Kent has a point. Supposing I were to come up with a funny base that I wanted to input my numbers in. 
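[The two-value interface GSB describes above can be sketched like this; a hypothetical illustration in later Common Lisp style, where DIGIT-CHAR-P plays the role of the proposed DIGIT-WEIGHT primitive and COLLECT-FIXNUM is an assumed name, not any actual Maclisp or NIL function:]

```lisp
;; Hypothetical sketch: scan STRING from START in the given RADIX,
;; returning two values -- the number parsed (or NIL if no digits were
;; found at START) and the index of the first non-digit character.
;; DIGIT-CHAR-P returns the digit's weight in RADIX, or NIL.
(defun collect-fixnum (string radix start)
  (do ((i start (1+ i))
       (n nil))
      ((or (>= i (length string))
           (not (digit-char-p (char string i) radix)))
       (values n i))
    (setq n (+ (* (or n 0) radix)
               (digit-char-p (char string i) radix)))))

;; (collect-fixnum "17 pigs" 10 0)  returns 17 and 2
;; (collect-fixnum " pigs" 10 0)    returns NIL and 0
```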
I would prefer to just have to rewrite the DIGIT-WEIGHT function to accept a tag for this base, and then my code (which might be or might duplicate the functionality of INPUT-FIXNUM) would work in this new base. This type of information about turning characters into numbers is best left at a low level. -Barmar  Date: 8 OCT 1980 0246-EDT From: RMS at MIT-AI (Richard M. Stallman) DIGIT-WEIGHT is not worth all the arguments going on about it.  Date: 8 October 1980 1053-EDT (Wednesday) From: Quux at CMU-10A Sender: Guy.Steele at CMU-10A Re: DIGIT-WEIGHT etc. From: Robert W. Kerns I'm not opposed to the existence of DIGIT-WEIGHT. But the situation is not really analogous to CAR/CDR. CAR/CDR are used in a much less constrained manner than DIGIT-WEIGHT would be. ... Great! Let's allow CAR/CDR to be truly unconstrained. CAR of a character can be its digit weight, and maybe CDR can upper-case it.  Date: 9 October 1980 1024-EDT (Thursday) From: Quux at CMU-10A Sender: Guy.Steele at CMU-10A To: RWK at MIT-MC cc: lisp-forum at MIT-MC From: Robert W. Kerns From: Quux From: Robert W. Kerns I'm not opposed to the existence of DIGIT-WEIGHT. But the situation is not really analogous to CAR/CDR. CAR/CDR are used in a much less constrained manner than DIGIT-WEIGHT would be. ... Great! Let's allow CAR/CDR to be truly unconstrained. CAR of a character can be its digit weight, and maybe CDR can upper-case it. Huh? Are you trying to make some point about what I said, or is this merely unrelated humor? I can neither find the point nor detect the humor. The humor is there only if you cannot see it. Inspect the joke under a microscope. Use sonar. Try a CAT scan. What is the sound of one Quux clapping, in the forest with no one to hear? Actually, what I propose wouldn't be too hard to arrange on a VAX. Let "characters" not be of pointer type, but put interesting information at the VAX locations which they happen to address. 
It might then be very easy to let CAR of a character be its digit-weight... Date: 20 March 1981 2257-EST (Friday) From: Guy.Steele at CMU-10A Re: Proposed extension to DOLIST, DOTIMES One is permitted to use RETURN within DOLIST and DOTIMES, and this is very useful for aborting the iteration. It would be more useful if one could reliably return a value to look at. This can be done with RETURN, but the problem is that there is no way to specify the value if the iteration finishes successfully. Hence I propose that (DOLIST (var list retval) . body) be just like DOLIST, except that if and when all iterations are completed, the form "retval" is evaluated and its value becomes the result of the DOLIST (similarly for DOTIMES). If retval is omitted, () is used (I believe that is what happens now anyway). (defun first-word-in-string (str) ;spaces separate words (dotimes (j (string-length str) str) (when (char= #/space (char j str)) (return (substring str 0 j))))) This returns the first "word" in a given string. If the string has no spaces, then the string itself is returned. If one wants both the word and the index of the space, or () if no space was found: (defun first-word-in-string (str) (dotimes (j (string-length str) (values str ())) (when (char= #/space (char j str)) (return (substring str 0 j) j)))) --Guy  Date: 20 March 1981 23:04-EST From: Robert W. Kerns Re: Return value from DOLIST, DOTIMES I think GLS's idea is far more useful than NIL's (unannounced, undiscussed, added-by-person-who-wrote-the-code) feature of DOLIST taking a "count" argument in that position.  Date: 20 MAR 1981 2305-EST From: Moon at MIT-AI (David A. Moon) To: lisp-forum at MIT-MC cc: Guy.Steele at CMU-10A Re: Proposed extension to DOLIST, DOTIMES Date: 20 March 1981 2257-EST (Friday) From: Guy.Steele at CMU-10A .... I propose that (DOLIST (var list retval) . 
body) be just like DOLIST, except that if and when all iterations are completed, the form "retval" is evaluated and its value becomes the result of the DOLIST (similarly for DOTIMES). NIL is already using the syntax of three elements of the first subform of DOLIST, I believe. Once you start getting into crocks like this, isn't it better to use something with more syntax to it, such as LOOP? We have now seen three entirely different, and entirely reasonable, proposals for what to do with the third element of the first subform of DOLIST, which makes it clear that whichever one we picked no one would be able to remember which it was.  Date: 22 March 1981 00:44-EST From: Daniel L. Weinreb To: Guy.Steele at CMU-10A, lisp-forum at MIT-MC Re: Proposed extension to DOLIST, DOTIMES While I do think that this is the best proposal for extending these things that I have heard, I am still opposed to any extension to these things, partially because I hate the NIL extension and don't want to be gratuitously incompatible, but mostly because they are intended for SIMPLE cases and we have plenty of forms for dealing with hairy cases. Adding random features to things like these is silly; while yours is by far the least random suggestion I have heard, I am still marginally against it.  Date: 23 March 1981 17:46-EST From: Jon L White To: RWK at MIT-MC cc: LISP-FORUM at MIT-MC Re: DOLIST, with index variable Date: 20 March 1981 23:04-EST From: Robert W. Kerns I think GLS's idea is far more useful than NIL's (unannounced, undiscussed, added-by-person-who-wrote-the-code) feature of DOLIST taking a "count" argument in that position. DOLIST has never been specifically part of the NIL design, but NIL would logically inherit it from MacLISP. There was a version of DOLIST in MacLISP's UMLMAC file, which was inserted by someone (?? who?) between the LISP RECENT note of FEB 07,1980 and Oct 1980. 
So I documented what I found in UMLMAC, in the LISP RECENT note of Oct 28,1980, and proceeded to use it; I suspect other people have used it too, for it is an extremely common case to want to map down a list knowing the index of the element you are working on. So common, in fact, that if it has to be retroactively excised from MacLISP/NIL for compatibility with whomever, then we'll simply have to invent some other name for it (e.g., DOLISTIMES !) As MOON has pointed out, there are already three entirely reasonable, unfortunately incompatible, proposed extensions for DOLIST; so no matter what standard could be chosen for DOLIST now, it would probably inconvenience someone. Date: 19 September 1980 18:00-EDT From: Kent M. Pitman Re: #\EOF I asked BUG-LISP and got no objection, but I'll ask the whole community just in case now that this mailing list exists: Would anyone object to having a #\EOF which returned an end of file value (eg, -1 in Maclisp)? -kmp  Date: 19 September 1980 21:32-EDT From: David A. Moon To: KMP at MIT-MC cc: LISP-FORUM at MIT-MC Re: #\EOF What's the point? TYI is defined to return -1 at EOF. This is part of the language, it is not an operating system dependency and should not be expected to vary across systems. (In fact, ITS returns 777777000003 at EOF, not -1). Furthermore, TYI is the only function that returns -1 at EOF. READ and READLINE take an argument which is what to return (and it doesn't default to -1!) and IN doesn't return anything, but gives an error.  Date: 20 September 1980 11:44-EDT From: George J. Carrette To: MOON at MIT-MC cc: KMP at MIT-MC, LISP-FORUM at MIT-MC Re: #\EOF I have found it useful when hacking various output devices with ascii or otherwise protocols (SANDERS printer, ARDS and TEK graphics), to define my own sharp-sign-back-slash tokens. (+TYO #\ARDS-LONG-MODE STREAM) and that sort of thing. Provide a standard function for doing that, and if KMP wants to write (= #\EOF (TYIPEEK)) then I don't mind.  
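[The "standard function" GJC asks for might be as simple as a user-extensible name table consulted by the #\ reader. A purely hypothetical sketch; none of these names are actual Maclisp functions, and the #o35 value is invented for illustration:]

```lisp
;; Hypothetical sketch: a user-extensible table for #\ tokens.
(defvar *user-sharp-tokens* nil)        ;alist of (name . fixnum)

(defun define-sharp-token (name value)
  (push (cons name value) *user-sharp-tokens*)
  name)

;; The #\ reader macro would consult this table before (or after)
;; its built-in names, so that after
;;   (define-sharp-token 'ards-long-mode #o35)
;; the token #\ARDS-LONG-MODE would read as the fixnum #o35.
(defun user-sharp-token-value (name)
  (cdr (assoc name *user-sharp-tokens*)))
```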
Date: 20 September 1980 14:16-EDT From: Kent M. Pitman To: MOON at MIT-MC cc: LISP-FORUM at MIT-MC, GJC at MIT-MC Re: #\EOF Date: 19 September 1980 21:32-EDT From: David A. Moon ... TYI is defined to return -1 at EOF. This is part of the language, it is not an operating system dependency and should not be expected to vary across systems. (In fact, ITS returns 777777000003 at EOF, not -1). That's not true. The Maclisp TYI function runs the EOFFN if it exists, but if no EOFFN exists, it returns its second arg (if no 2nd arg and no eoffn, then an error). Eg, (TYI stream 3) is going to return 3 on end of file. Since -1 is a good end of file value, #\EOF would be a good equivalent. (TYIPEEK () stream n) should also return n on EOF but fails to due to a Maclisp bug which leaves it returning -1 always. In any case, the -1 makes very little sense conceptually while the #\EOF reminds the reader of its intent. Furthermore, TYI is the only function that returns -1 at EOF. READ and READLINE take an argument which is what to return (and it doesn't default to -1!) and IN doesn't return anything, but gives an error. I am not swayed by this argument any more than I would be swayed by an argument that #\TAB is a meaningless thing to have around because READLINE could never return it. In fact, (READLINE stream #\EOF) would be perfectly well-defined. So would (READ stream #\EOF) although the result of such a read would be a bit ambiguous. The fact that IN cannot take such an arg is sad, but I don't know what to say. Perhaps IN should take an optional second arg in case the stuff being IN'd is sufficiently constrained to make it useful to say (IN stream n). The particular semantics of IN/OUT, READ, TYI, etc are not really in question, however. The point is that I have to specify a number. In some system where negative numbers meant something to TYI, my code would break. 
I admit that's a case *not* worth worrying about, but it still happens that I would like symbolic type-in modes (ie #\ or #/ things) for all character primitives and -1 is something TYI could return. -kmp  Date: 20 September 1980 23:10-EDT From: David A. Moon To: KMP at MIT-MC cc: LISP-FORUM at MIT-MC, GJC at MIT-MC Re: #\EOF (look before you leap) :copy tty:,dsk:. . ^C :lisp n (setq foo (open '|. .|)) (tyi foo) ==> -1, not an error.  Date: 21 SEP 1980 0135-EDT From: DLW at MIT-AI (Daniel L. Weinreb) To: GJC at MIT-MC cc: LISP-FORUM at MIT-AI Re: #\EOF Why do you need your own #\ constructs when you have a read-time-eval feature? Come now, must EVERY feature in the entire system have a hook into it? I cannot believe this enhances readability.  Date: 21 September 1980 13:51-EDT From: Kent M. Pitman To: DLW at MIT-AI cc: GJC at MIT-MC, JAR at MIT-MC, LISP-FORUM at MIT-MC Re: #\EOF The reason is that EOF must be a value which is not a possible character to be useful. Since the space of characters is system-dependent, then so is the complementary space of not-possible characters. If I have to do (EVAL-WHEN (EVAL COMPILE) (SETQ EOF -1.)) .... code involving #.EOF ... instead of doing #\EOF where that value is defined by the system, then I might as well just reference -1 and be done with it... and I might as well have written 65. as #/A ... I once proposed that a # thing be made to get funny symbols in addition to fixnum reps of those symbols and people said why don't I use #.(ASCII #\...). I agree that that case was a convenience issue, but I don't agree that #\EOF is of the same class. Doesn't *anyone* else see that unless the system is complete, it doesn't really buy you what it pretends to? JAR or GLS -- I would be interested in your comments. -kmp  Date: 23 SEP 1980 2340-EDT From: RMS at MIT-AI (Richard M. Stallman) Re: eof It does improve program readability to use something symbolic instead of a -1. 
However, there is no reason why this has to require any read-time syntax. Just use a symbol whose value is -1. The compiler could even allow a declaration that the symbol value would never change, for systems where that would lead to increased efficiency. We don't need to have EVERY constant value in the world have a read-time syntax when there are variables. This may apply to #\TAB as well. Instead of a new declaration, we could make DEFCONST values available at compile time like macros, and then suggest that users use #,. Alternatively, a reasonable idea is that #\ should be used for ALL values in a program which are to be considered constant by the compiler -- for #\ to be the way to ask for a value to be taken as constant -- and then provide a DEFsomething to define them. A uniform convention is important. We might want to replace the #\TAB names with something that fits a convention that the user can use as well, if the convention isn't using #\. However, where EOF is concerned, the best code is not a comparison with -1, but a sign test. In other words, not (= -1 (TYIPEEK)) but (MINUSP (TYIPEEK)). This is true on the Lisp machine and in Maclisp. If you want to use code which says "eof" in it for clarity, it should expand into the latter. So what we need is (EOFVALP (TYIPEEK)) and let it be the same as MINUSP.  Date: 24 September 1980 0957-EDT (Wednesday) From: Guy.Steele at CMU-10A To: RMS at MIT-AI (Richard M. Stallman) cc: lisp-forum at MIT-MC Re: eof On the whole, I agree with your remarks concerning #\EOF and #\ versus #,. The original CHPROP had in fact suggested using #,TAB etc. One disadvantage of this, however, is that the standard names of characters tend to be short (TAB, CR, etc.), and there might be some trouble if one were chosen by accident as a user variable. This is always a problem, of course; what I would really like is a way to have DEFCONST or equivalent "lock" a value cell so that it couldn't be bound or setq'd. But I digress. 
One advantage of #\ for characters in particular is that they are easily recognized as character values--and there is a pleasant symmetry with #/. Also, if #, were to be used then would it generalize to #,TAB for Control-TAB? If so, then a suitable choice of variable names would preserve recognizability: #,\TAB !  Date: 25 September 1980 23:00-EDT From: Kent M. Pitman Re: #\EOF ... Yes, again. Well, I am somewhat encouraged that I spoke with DLW today offline and he thought he might be swayed so I'll lay this out clearly again once more, squashing the arguments I have heard: Argument: #\ should return only representations of characters. Answer: So should TYI and TYIPEEK. Over time, however, we have found it useful, especially when dealing with compiler optimizations related to fixpdls, to return an eof designator, -1. If it's good enough for TYI and TYIPEEK, it's good enough for me. Argument: All TYI's and TYIPEEK's return -1 at eof; why not just use that? Answer: (1) All TYI's and TYIPEEK's return #o101 for "A" being typed, but #/A was thought to be more readable. (2) If people change the representation of characters, as the LispM people have found it useful to do, negative numbers might want to acquire an interpretation as characters. Do you want to break code that does (TYI STREAM -1) and which will then perhaps perceive a premature end of file. (3) No manual for any existing Lisp documents that -1 is not a part of the possible character set. Nor does any manual document that TYI returns -1 at end of file. The fact that it just happens to annoys me. I want a documented standard. Argument: Why not do (SETQ EOF -1) and reference #.EOF instead of #\EOF. Answer: (1) #.EOF doesn't supply visually as much info as #\EOF. (2) It's maybe now easy but unless conventions are adopted, I don't see any reason why it won't degenerate, over the years, into: (SETQ EOF #+LISPM SI:EOF-REPRESENTATION #+NIL *:EOF-FIXNUM #+MACLISP -1) or something silly like that. 
Saying DEFCONST instead of SETQ doesn't make it any less awful. The annoying part is having system dependencies at all. The bottom line: It is annoying to have this supposed abstraction which frees me of the `representation' specifics of my characters. If I type #/A, I should have to know only that it is going to return a fixnum which is the same fixnum as will be returned by TYI'ing an "A". Similarly, I need to be able to say #\EOF and know that this is the same fixnum, whatever it is, that will be returned by TYI'ing an eof. If you don't grant this, then you require me to always know *some* information about the mapping from characters to fixnums. I have to know the shape of the set of valid characters so I can choose a character that's not in it. I claim that if you have an abstraction which is 98% complete, then you haven't really bought yourself anything. Going this extra 2% will buy you an elegant bit of insulation from your character set and move you closer to the goal of robust code that is not bothered by trivial implementational details. -kmp  Date: 25 Sep 1980 21:12:51-PDT From: CSVAX.jkf at Berkeley To: KMP at MIT-MC cc: lisp-forum at mit-mc Re: #\EOF ... Yes, again. ... Nor does any manual document that TYI returns -1 at end of file. The fact that it just happens to annoys me. I want a documented standard. The Franz Lisp manual documents the fact that tyi returns -1 on end of file. In anticipation of the eventual acceptance of #\EOF I have added it to our sharp sign macro. I also favor DEFCONST.  Date: 26 September 1980 00:14-EDT From: Daniel L. Weinreb Sender: dlw at CADR4 at MIT-AI Re: #\EOF ... Yes, again. Another point KMP and I discussed: the reason the manifest constant for end-of-file really ought to be a "#\" (rather than some random symbol used with #. or something) is that while the end-of-file value is not precisely a character, it is in the same name space as characters; it is one of the things TYI can return. 
It is as much a character as the Pascal nil or PL/1 null pointer is a pointer. If you were to switch character sets, and make all the things like "#/A" and "#\RETURN" have different values, you would also want to change the end-of-file value; they go hand in hand. The end-of-file value and the rest of the characters are in the same name space. Therefore, the right way to represent the end-of-file value is with #\EOF.  Date: 26 September 1980 01:40-EDT From: Alan Bawden To: KMP at MIT-MC, LISP-FORUM at MIT-MC Re: #\EOF ... Yes, again. OK, let's have #\eof. Nobody has to use it, and it might make some compatibility easier to achieve. But, it is in no way elegant: Date: 25 SEP 1980 2300-EDT From: KMP at MIT-MC (Kent M. Pitman) Argument: #\ should return only representations of characters. Answer: So should TYI and TYIPEEK. Over time, however, we have found it useful, especially when dealing with compiler optimizations related to fixpdls, to return an eof designator, -1. If it's good enough for TYI and TYIPEEK, it's good enough for me. A character is something that you can put in strings and pnames. You can TYO it. You can hold down the meta key and type it. #\eof fits none of these descriptions. So it must be that #\eof isn't a character. This makes it different from everything else that a #\ can be. Argument: All TYI's and TYIPEEK's return -1 at eof; why not just use that? Answer: (1) All TYI's and TYIPEEK's return #o101 for "A" being typed, but #/A was thought to be more readable. Do I believe what I read here? I thought that you wanted to have #/ and #\ return character objects. And I thought that you wanted to write code that works on an EBCDIC machine? (2) If people change the representation of characters, as the LispM people have found it useful to do, negative numbers might want to acquire an interpretation as characters. Do you want to break code that does (TYI STREAM -1) and which will then perhaps perceive a premature end of file. 
Nobody will ever make this mistake (making -1 a character) and live. (3) No manual for any existing Lisp documents that -1 is not a part of the possible character set. Nor does any manual document that TYI returns -1 at end of file. The fact that it just happens to annoys me. I want a documented standard. Gee, it really isn't documented, is it! (I haven't checked LISP NEWS yet, it is probably there, but that is a pretty poor place for documentation.) (I don't follow the logic here anyway.) Argument: Why not do (SETQ EOF -1) and reference #.EOF instead of #\EOF. Answer: (1) #.EOF doesn't supply visually as much info as #\EOF. Agreed. Also it seems pretty offensive to give the symbol EOF a value. (2) It's maybe now easy but unless conventions are adopted, I don't see any reason why it won't degenerate, over the years, into: (SETQ EOF #+LISPM SI:EOF-REPRESENTATION #+NIL *:EOF-FIXNUM #+MACLISP -1) or something silly like that. Saying DEFCONST instead of SETQ doesn't make it any less awful. The annoying part is having system dependencies at all. Well, no matter what, there is gonna hafta be something like this SOMEPLACE. I guess you must mean that people shouldn't be forced to write that at the beginning of their code. I agree. ... If you don't grant this, then you require me to always know *some* information about the mapping from characters to fixnums. I have to know the shape of the set of valid characters so I can choose a character that's not in it. I claim that if you have an abstraction which is 98% complete, then you haven't really bought yourself anything. ... Well, first let me remind you that I started this by saying that #\eof is ok by me. (caseq (tyi) (#\space ...) (#/A ...) (#\eof ...)) looks almost right (*). But: I also remind everybody that it is IMPOSSIBLE to be 100% character representation independent. (Remember #\lambda and #\backspace and #^H ?? They are all EQ in MacLisp so you had better be careful not to use two of them in the same context!) 
So perhaps we move from 98% to 99%; you still have to have *some* information about the mapping no matter what. And you have bought yourself something (even at 98%!) (*) Of course, due to stubbornness on the part of the LispMachine people this code won't run. They refuse to bite the bullet and swallow caseq, insisting that I should use a selectq macro and lose any possible benefits from chomplr's better understanding of caseq. Then again, the MacLisp people could let caseq take fixnums and other objects mixed as keys, and then I could let selectq expand into caseq and not use caseq in my code. But as it stands I must use both and supply my own caseq for the LispMachine. *SIGH* compatibility!  Date: 26 September 1980 12:25-EDT From: Daniel L. Weinreb Sender: dlw at CADR2 at MIT-AI To: ALAN at MIT-MC, LISP-FORUM at MIT-MC Re: #\EOF ... Yes, again. From: Alan Bawden A character is something that you can put in strings and pnames. You can TYO it. You can hold down the meta key and type it. #\eof fits none of these descriptions. So it must be that #\eof isn't a character. This makes it different from everything else that a #\ can be. This is what my mail yesterday was about. The end-of-file indicator is not a character, but it is in the same name space as characters, and therefore it really IS elegant to use a #\ construct. See yesterday's letter for details. Argument: All TYI's and TYIPEEK's return -1 at eof; why not just use that? Answer: (1) All TYI's and TYIPEEK's return #o101 for "A" being typed, but #/A was thought to be more readable. Do I believe what I read here? I thought that you wanted to have #/ and #\ return character objects. And I thought that you wanted to write code that works on an EBCDIC machine? No, you miss his point; he is being sarcastic, saying that you should no more depend on the -1 than you should on the 101, even if all implementations now are the same, because there might be different ones in the future. 
Nobody will ever make this mistake (making -1 a character) and live. It doesn't matter; it is inelegant to depend on it. Golly gee, the Lisp Machine Manual really doesn't document the -1. What do you know...  Date: 26 September 1980 15:53-EDT From: Kent M. Pitman To: CSVAX.jkf at BERKELEY cc: LISP-FORUM at MIT-MC Re: #\EOF ... Yes, again. Date: 25 Sep 1980 21:11:58-PDT From: CSVAX.jkf at Berkeley To: KMP@MIT-MC cc: lisp-forum@mit-mc Re: #\EOF ... Yes, again. ... Nor does any manual document that TYI returns -1 at end of file. The fact that it just happens to annoys me. I want a documented standard. The Franz Lisp manual documents the fact that tyi returns -1 on end of file. In anticipation of the eventual acceptance of #\EOF I have added it to our sharp sign macro. I also favor DEFCONST. ----- Thanks for the support. -kmp  Date: 26 September 1980 23:08-EDT From: David A. Moon To: KMP at MIT-MC, LISP-FORUM at MIT-MC Re: #\EOF ... Yes, again. What are you going to do about the fact that TYI returns -1 at EOF but other functions return other things? For example, READ returns what you tell it to and (on the Lisp machine) the :TYI stream operation returns NIL at EOF.  Date: 26 September 1980 23:17-EDT From: Kent M. Pitman To: MOON at MIT-MC cc: LISP-FORUM at MIT-MC Re: #\EOF Date: 26 September 1980 23:08-EDT From: David A. Moon Subject: #\EOF ... Yes, again. To: KMP at MIT-MC, LISP-FORUM at MIT-MC What are you going to do about the fact that TYI returns -1 at EOF but other functions return other things? For example, READ returns what you tell it to and (on the Lisp machine) the :TYI stream operation returns NIL at EOF. What are you going to do about the fact that TYI returns #o50 at (FOO) but other functions return other things? For example, READ returns (FOO) and * READLINE returns |(FOO)|. The :TYI operation is documented to return () iff the end-of-file value requested is not given or is (). 
I would like a fixnum value that I can feed as the end-of-file value which is not in the character set. I hope that :TYI will not force a () on me if I specify the 2nd arg to that operation. -kmp * = Sarcasm [labelled for those that miss the point sometimes]  Date: 27 September 1980 17:15-EDT From: David A. Moon To: KMP at MIT-MC cc: LISP-FORUM at MIT-MC Re: #\EOF Date: 26 SEP 1980 2317-EDT From: KMP at MIT-MC (Kent M. Pitman) ... The :TYI operation is documented to return () iff the end-of-file value requested is not given or is (). I would like a fixnum value that I can feed as the end-of-file value which is not in the character set. I hope that :TYI will not force a () on me if I specify the 2nd arg to that operation. I don't know what document you're reading; this is not what the Lisp machine manual (which I would assume to be the definitive document on Lisp machine streams) says. The problem with having a "character" called #\EOF is that not all sources of characters want to indicate EOF in the same way. If it wasn't for the unfortunate presence of data types in Maclisp, such that TYI must always return a fixnum, I would advocate making NIL the end of file indication in everything, since it is clearly not a character and is the standard indicator of emptiness. Under the circumstances I think it would be misleading and confusing to introduce a "#\EOF" syntax because it would only work with some and not all functions.  Date: 27 September 1980 13:35-EDT From: Daniel L. Weinreb Sender: dlw at CADR6 at MIT-AI To: MOON at MIT-MC, KMP at MIT-MC, LISP-FORUM at MIT-MC Re: "What are you going to do about..." Well, READ and the :TYI operation do something other than returning the end-of-file character at end of file. Nobody said every input function must return the end-of-file character when they get to the end; they could return NIL, signal a condition, or anything they want. 
That doesn't mean there shouldn't be an end-of-file character; it just means that Lisp input functions are not all the same in their handling of end-of-file, which is already true in Maclisp and can't be changed. Date: 14 August 1980 17:43-EDT From: Alan Bawden Re: character objects & randomness Character objects are a loss. Characters are fixnums. You perform subtraction to upper case them. You add 7 to #/0 to generate #/7. You use > and < to determine if they are alphabetic. Would you change the MacLisp #/ to generate "character objects" (symbols)? That would be an amazing loss. If we are only talking about NIL, then I can live with it. On another topic: (eval-when (load) (eval-when (eval load compile) )) Should it evaluate at compile time? I can think of arguments both for and against.  Date: 14 August 1980 19:06-EDT From: Daniel L. Weinreb Re: eval-when (eval-when (load) (eval-when (eval load compile) )) Should it evaluate at compile time? I can think of arguments both for and against. If this evaluates at compile time, I would be very confused. As I think of it, the entire inner eval-when form should only be evaluated at load-time, and should be ignored at all other times. This should work just like nested ifs: (if (memq *time* '(load)) (if (memq *time* '(eval load compile)) ))  Date: 14 August 1980 22:40-EDT From: Alan Bawden To: DLW at MIT-AI cc: LISP-FORUM at MIT-MC, Greenberg.Multics at MIT-MULTICS Re: eval-when Date: 14 August 1980 19:06-EDT From: Daniel L. Weinreb (eval-when (load) (eval-when (eval load compile) )) Should it evaluate at compile time? If this evaluates at compile time, I would be very confused. Well, COMPLR currently DOES evaluate at compile time, and my first reaction was that this is a bug. It seemed, at first, that the right thing was for the stuff to be "evaluated" at those times that were in the intersection of the times specified by all surrounding eval-whens. 
But consider: (eval-when (compile) (eval-when (eval) )) When COMPLR sees the outer eval-when, it calls EVAL to process the rest of the stuff. Then EVAL sees the inner eval-when and, sure enough, it's now EVAL-time so the stuff gets evaluated! This has nothing to do with the intersection of anything, and it seems to be the correct behavior, since the name of the thing is "EVAL-when"! (It also does the correct thing with respect to macrology that might expand into an (eval-when ...). The macrology wants to know if the stuff is going to be run interpreted or compiled, and the surrounding (eval-when (compile) ...) is there to cause the stuff to be run interpreted (in the compiler).) The problem with (eval-when (load) ...) is that it in effect says (to the compiler): "continue to process the stuff in here, just as you would normally". Where what you sometimes want to do is turn off inner (eval-when (compile) ...)s. For example, suppose you had a macro that expanded into a bunch of putprops all wrapped up in a (eval-when (eval compile load) ...). If, for some reason, you decided that in just this one case you wanted different properties at compile and load time, then you might call the macro in two different places, once surrounded by an (eval-when (compile) ...) and once surrounded by an (eval-when (load) ...). But if you write them in that order, the wrong thing will happen, because the putprops in the (eval-when (load) ...) will happen at compile time anyway! Oh well, I guess that in the days when we did all this stuff with (declare (eval (read))) and such hacks, we could never have even been able to conceive of such subtleties!  Date: 14 August 1980 23:19-EDT From: Daniel L. Weinreb To: ALAN at MIT-MC cc: Greenberg.Multics at MIT-MULTICS, LISP-FORUM at MIT-MC Re: eval-when But consider: (eval-when (compile) (eval-when (eval) )) When COMPLR sees the outer eval-when, it calls EVAL to process the rest of the stuff. 
Then EVAL sees the inner eval-when and, sure enough, it's now EVAL-time, so it gets evaluated! Well, when the compiler is evaluating stuff that was inside an (eval-when (compile) ...), SHOULD that be considered EVAL time or not? (Q: What time is it when the compiler evaluates something inside an (eval-when (eval) ...)? A: Time to get a new compiler.) Actually I am not sure what is right. We could simply award this case to the first person who is trying to do something reasonable and wants it a particular way...  Date: 15 August 1980 1048-edt From: Bernard S. Greenberg To: DLW at MIT-AI cc: alan at mit-mc, lisp-forum at mit-mc Re: eval-when Any consistent interpretation of (eval-when (compile) (eval-when (eval) stuff...)) says that stuff must get evaluated if this form is indeed encountered at top level at compile time. Eval-when, as far as I'm concerned, is meaningful to the compiler only when encountered at top level. Any attempt to make it say "oh, I'm in the compiler somehow and shouldn't eval" is the same as the old (status feature compiler) kludgery that ALAN used to use before I got converted and did eval-when. Packages loaded under the control of an eval-when that themselves contain eval-whens will not load transparently in source or object or both, etc. Macros that output eval-whens have to arrange to output them at top level, via qwerties or what have you if necessary. I think the current action is the right one.  Date: 15 August 1980 16:58-EDT From: David A. Moon To: LISP-FORUM at MIT-MC, Greenberg.Multics at MIT-MULTICS Re: eval-when The problem, as has been discussed many times in the past, is that there are no eval-when keywords to distinguish between "evaluate at load time without going through the compiler expansion" and "load the result of compiler processing". The standard forms for doing this are (PROGN ...) and (PROGN 'COMPILE ...) respectively. (EVAL-WHEN (LOAD) ...) is defined to be the latter, since that is more commonly what you want. 
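The distinction Moon describes can be sketched as follows (a sketch only; the DEFPROP form is a hypothetical example, not from the thread):

```lisp
;; Dumped as data and EVAL'ed by the loader at load time,
;; bypassing compiler processing of the body:
(PROGN (DEFPROP FOO 3 SIZE))

;; The compiler processes the body as top-level forms;
;; the compiled result runs at load time:
(PROGN 'COMPILE (DEFPROP FOO 3 SIZE))
```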
Would anyone care to propose an EVAL-WHEN keyword to mean the former?  Date: 15 August 1980 19:02 edt From: Greenberg.Multics at MIT-Multics To: MOON at MIT-MC (David A. Moon) cc: LISP-FORUM at MIT-MC Re: eval-when I, for one, don't see the connection between a perceived need for evalling only when actually loaded and eval-when-compile's including eval-when-eval's. Could you give us an example?  Date: 16 August 1980 18:48-EDT From: Daniel L. Weinreb To: Greenberg.Multics at MIT-MULTICS, LISP-FORUM at MIT-MC Re: eval-when ...to distinguish between evaluate at load time without going through the compiler expansion, and load the result of compiler processing. The standard forms for doing this are (PROGN ...) and (PROGN 'COMPILE ...) respectively. Naively viewed, what you seem to be talking about is a distinction between compiled code running at LOAD-time and evaluated code running at LOAD-time. Is this right? If so, why does it matter? (eval-when (load-evaluated) ...) is the best name I can think of, if this is really needed for anything. If this extension is made, (eval-when (load-compiled) ...) should also be defined, as a synonym for (eval-when (load) ...), for obvious reasons.  Date: 9 Dec 1980 2209-EST From: HEDRICK at RUTGERS To: meyers at SRI-KL, action at SRI-KL, geoff at SRI-KL, jonl at MIT-AI, lisp-forum at MIT-MC, frank at UTAH-20, cpr at MIT-MC, admin.mrc at SU-SCORE, wactlar at CMU-10A, steele at CMU-10A, rindFLEISCH at SUMEX-AIM cc: josh at RUTGERS, amarel at RUTGERS, levy at RUTGERS Re: is it sensible to think of an extended address Lisp for the DEC-20? We are trying to do a bit of long-term system planning here. One thing that seems fairly obvious to us is that almost everybody who is doing Lisp development work is using dedicated ("personal") processors of one sort or another. We can understand this at Xerox-Parc, where they can afford to put the equivalent of a 168 in everybody's office. 
But it seems to us that at more "normal" places a mixed strategy may be more appropriate, with this sort of high-performance personal computer being used only for groups with very heavy usage. If this makes sense, it seems that we should be concerned with keeping Lisp for PDP-10 class machines viable. In this context, an obvious project would be to produce a version of Lisp that takes advantage of the DEC-20's extended address space. However, it seems to us that this is not being done by any of the people we would normally expect to do it. What we are curious about is whether this is because there is some good reason why the project doesn't make sense, or just because everyone is so busy with shiny new toys (I don't intend this disparagingly) that no one has had time to worry about our old workhorse?  Date: 9 Dec 1980 1955-PST From: Mark Crispin To: HEDRICK at RUTGERS, meyers at SRI-KL, action at SRI-KL, geoff at SRI-KL, jonl at MIT-AI, lisp-forum at MIT-MC, frank at UTAH-20, cpr at MIT-MC, wactlar at CMU-10A, steele at CMU-10A, rindFLEISCH at SUMEX-AIM cc: josh at RUTGERS, amarel at RUTGERS, levy at RUTGERS Re: is it sensible to think of an extended address Lisp for the DEC-20? I think the problem is that everybody is so busy with shiny new toys that they are disdainful of our old workhorse. The PDP-10 has this problem that it has a reasonable machine architecture, so reasonable in fact that it is a delight to program in its assembly language. For this and this alone, it and 15 years of software are being condemned to the scrap heap by those who would rather run 6-user UNIX systems on multi-megabyte VAXen. Arguments about the address space are silly; extended addressing fixes all this. I certainly support the idea of an extended addressing LISP. 
-- Mark  Date: 10 December 1980 1208-EST (Wednesday) From: Guy.Steele at CMU-10A To: HEDRICK at RUTGERS cc: meyers at SRI-KL, action at SRI-KL, geoff at SRI-KL, lisp-forum at MIT-MC, frank at utah-20, cpr at MIT-MC, admin.mrc at SU-SCORE, Howard.Wactlar at CMU-10A, rindfleisch at sumex-aim, josh at rutgers, amarel at rutgers, levy at rutgers Re: is it sensible to think of an extended address Lisp for the DEC-20? I remember all too well worrying about whether to extend MacLISP for DEC20 extended addressing, when it first was announced (rumored?). It's been some time, but I'll try to outline here why we (JONL, I, and whoever else was around at the time) didn't do it: (1) There is an obvious problem regarding data formats. Either you make pointers larger than 18 bits, or you don't. If you don't, then you can only take advantage of the extra address space by filling it with compiled code (which might have been acceptable for MACSYMA), and additionally you could keep special things like print name strings outside the "primary" 18-bit address space which pointers can reference. The reason one might not make pointers bigger is to preserve as much compatibility as possible with old code (e.g. HLRZ=CAR, HRRZ=CDR). On the other hand, this doesn't really work either, because as I recall indexing off a pointer whose LH is zero means "current 18-bit address space", not "address space zero". One might use HLRO and HRRO for CAR and CDR, and make the last 18-bit address space be the one for data, but already it's getting kludgy. Also, there's the problem of not being able to put list-structure constants as immediate data, e.g. (CAIN B (QUOTE FOO)) as an instruction. On the other hand, if you make pointers be greater than 18 bits, the data formats have to change radically; in particular, list cells now occupy two words. Thus you have already lost a factor of two on space before beginning to get any back. 
This could be expected to have a severe effect on virtual memory performance, particularly on programs that didn't need extended addressing anyway, which are many. (2) After number (1), you want another reason??? The problem is that DEC20 extended addressing is really a kludge, and it's not a simple question of a smooth upgrading of an existing LISP system. We estimated that over half the existing code would have to be rewritten; given the choice of doing that or holding off and waiting for the LISP Machine or VAX, the latter seemed more attractive and profitable. P.S. I just remembered that using HRRO and HLRO didn't work either, because DEC20 extended addressing called for ignoring the LH of an index if the sign bit was set (this was to make indexing off AOBJN pointers still work, in some sense). I told you it was a kludge!  Date: 10 Dec 1980 1311-EST From: JoSH Re: a note on extended addressing lisp for 20's The idea under consideration is to ditch the current format completely, rewrite the whole assembly-language kernel (much smaller than Maclisp's), and use two-word cons cells. The comparison shouldn't be made between this and a standard pdp10 lisp, but between this and another "big" lisp (say on VAX). Such a lisp would need a similar amount of physical memory for a similar list structure (unless cdr-coded [is NIL cdr-coded?]), and we would regretfully leave behind the innocent days of our youth when everything could fit into 18 bits (or 16, or 12, ...). --JoSH ps: to clarify: the question is, "Is there any compelling reason why, given you need a large address space lisp, not to write one for a 20 (as opposed to, say, a VAX, assuming that you would have to start fresh either way)?"  
Date: 10 Dec 1980 10:14 PST From: Deutsch at PARC-MAXC To: HEDRICK at RUTGERS cc: meyers at SRI-KL, action at SRI-KL, geoff at SRI-KL, jonl at MIT-AI, lisp-forum at MIT-MC, frank at UTAH-20, cpr at MIT-MC, admin.mrc at SU-SCORE, wactlar at CMU-10A, steele at CMU-10A, rindFLEISCH at SUMEX-AIM, josh at RUTGERS, amarel at RUTGERS, levy at RUTGERS Re: is it sensible to think of an extended address Lisp for the DEC-20? I am basically in agreement with Guy Steele's discouraging comments. However, you might contact Cordell Green at SCI. He and some SRI people were the only ones I know who were interested in extended-address Lisps for the DEC10/20. (Actually, he was interested in it as one alternative; another was to put ByteLisp, the PARC large-address-space Interlisp implementation, on the Foonly machines using special microcode.)  Date: 10 December 1980 1444-EST (Wednesday) From: Guy.Steele at CMU-10A To: HEDRICK at RUTGERS cc: lisp-forum at MIT-MC Re: is it sensible to think of an extended address Lisp for the I didn't say that using two words per CONS cell was a kludge; I said that DEC20 extended addressing is a kludge. The giveaway is the word "extended" -- if it's extended, then it doesn't fit well into the architectural framework. As for my long list of reasons, perhaps I neglected to make it clear that they were not reasons for never doing it, but reasons why MacLISP didn't naturally "grow" into an extended-addressing implementation. Bringing up NIL on a DEC20 wouldn't be too much worse than bringing it up on any other machine. An extended-addressing DEC20 is a fine machine, viewed as such. It is not, however, to be viewed as a terribly upward-compatible version of the PDP-10 architecture; the use of extended addressing would pervade everything, however superficially similar the instruction sets appear to be. (DEC bills the VAX as a member of the PDP-11 family, too, but calling a tail a leg doesn't make it one.) 
--Guy  Date: 10 Dec 1980 1221-PST From: Rindfleisch at SUMEX-AIM To: HEDRICK at RUTGERS, meyers at SRI-KL, action at SRI-KL, geoff at SRI-KL, jonl at MIT-AI, lisp-forum at MIT-MC, frank at UTAH-20, cpr at MIT-MC, admin.mrc at SU-SCORE, wactlar at CMU-10A, steele at CMU-10A cc: josh at RUTGERS, amarel at RUTGERS, levy at RUTGERS, RINDFLEISCH at SUMEX-AIM Re: is it sensible to think of an extended address Lisp for the DEC-20? In response to the message sent 9 Dec 1980 2209-EST from HEDRICK@RUTGERS I think it is very worthwhile to develop extended adr space LISP's for 20xx's. Based on current projections, we at SUMEX feel that the next five years or so will be hallmarked by a pragmatic, heterogeneous approach to meeting AI computing needs and that work will proceed using 20xx's, VAX's, and personal machines of various kinds. There are clearly a number of (highly visible) projects started to develop and explore alternatives to large central 20's to achieve better solutions to adr space limits, man/machine interactions, cost-effective dissemination of AI pgms, etc. This is all healthy and research dollars flow readily to explore such new things as they should. Still, most active places are buying more 2060's to meet immediate capacity needs and to take advantage of the beautiful stock of existing software. No alternative large machine appears on the near horizon (< 3 yrs). I am uncertain about the amount of effort required to convert systems like INTERLISP or MACLISP to use 20-style extended addressing. I've followed a number of pointers leading to Alice Hartley at BBN who is reputed to have spent 6-9 mo. looking at the INTERLISP problem and finally to have given up (she has not answered my msg asking for details so I can't confirm this). Some LISP providing a good programmer environment and also (at least projected to be) available on VAX's and some flavors of personal machines must be adaptable to use extended addressing on the 20. 
Such a system could get very popular very quickly... Tom R.  Date: Friday, 12 December 1980 13:32-EST From: Chris Ryland To: Rindfleisch at SUMEX-AIM, HEDRICK at RUTGERS, RINDFLEISCH at MIT-EECS, action at SRI-KL, admin.mrc at SU-SCORE, amarel at RUTGERS, frank at UTAH-20, geoff at SRI-KL, jonl at MIT-AI, josh at RUTGERS, levy at RUTGERS, lisp-forum at MIT-MC, meyers at SRI-KL, steele at CMU-10A, wactlar at CMU-10A Re: is it sensible to think of an extended address Lisp for the DEC-20? I agree entirely with Tom's remarks about the worth of an extended Lisp for the -20. Most sites are facing a 5-year wait before personal machines will be powerful enough and cheap enough to even consider giving one to each senior research type; and, most of us can't afford to leave behind all the -20 software and jump on the VAX bandwagon as long as there are -20's in our stables. The 2080 will make the economics of using larger-scale machines even more attractive, at least for 3-4 more years (in case people haven't heard much about the 2080, it's a machine 3-5x the speed of a 2060 for a little less cost, with high internal redundancy for reliability (the first field test CPU will probably show up at MIT in 9-11 months, though it won't reach the marketplace for another 1.5 years); it will also have an Ethernet interface as a standard part of the front-end, with a Xerox 5700 (or something similar) as a page printer option). However, Guy is entirely right about the horrific kludginess of the -20's extended addressing scheme. It's only practical to consider using it in something like the NIL project, which is based on a machine-language kernel of modifiable size and which can afford to suffer the slings of outrageous address structures. It's probably useless to toy with the idea of modifying systems such as MacLISP and Interlisp, which are rife with assumptions about cons cell structure. 
(BTW, what ever happened to BBN's proposal to microcode the Prime 750 to run Interlisp, just out of curiosity?)  Date: 14 Dec 1980 0913-EST From: HEDRICK at RUTGERS To: Rindfleisch at SUMEX-AIM, HEDRICK at RUTGERS, cpr at MIT-MC, action at SRI-KL, admin.mrc at SU-SCORE, amarel at RUTGERS, frank at UTAH-20, geoff at SRI-KL, jonl at MIT-AI, josh at RUTGERS, levy at RUTGERS, lisp-forum at MIT-MC, meyers at SRI-KL, guy.steele at CMU-10A, wactlar at CMU-10A, bboard at RUTGERS Re: extended addressing In view of the varied responses to my question about extended addressing Lisp, I thought it would be interesting to try an experiment. So I have implemented a subset of Lisp on an experimental basis, using 2-word cons cells. What I find is that most of the comments about the supposed disadvantages of extended addressing seem not to be true, but there are certainly a few holes in the support. In more detail: The implementation is an A-List Lisp, taken right out of the Lisp 1.5 document (McCarthy et al.). EVAL and APPLY were coded right out of that book. Enough functions were coded to make it sufficient for experimental purposes. (55 of them are handcoded, and an init file is automatically read, to allow others to be defined in Lisp.) Here are some tentative conclusions: (1) Performance measures are hard to get, because the technology of the interpreter isn't good enough to provide a fair test of extended addressing. (The first test showed that it was spending about 95% of its time in ASSOC, presumably searching the Alists for variable bindings and atom property lists for function definitions.) You can get something that looks like a real Lisp in a few days of programming, but it doesn't work like a real Lisp. (2) The supposed "kludgey" nature of extended addressing did not show up. I found that it did exactly what I wanted, and I don't think any more code was needed than for an 18-bit implementation. As you may know, in principle there is a 30-bit address space. 
We use this as:
- 1 mark bit
- 5-bit type field
- 30-bit data
Having a mark bit and type field in each object makes the garbage collector much easier to write (and presumably more efficient). Also, it seems natural not to have weird representations for numbers and other objects, since 30 bits is enough to store most of what you want (even atom headers, it turns out). Basically you get global addressing only when you want it. References within the code all come out as local automatically. The only global addresses come from following Lisp pointers. Even in 18-bit implementations, a CDR consists of
move a,thing
hlrz a,(a)
It really is no worse to do
move a,thing
move a,1(a)
(which is what you do, given that addresses are stored in global form, i.e. with the high-order bit off). (3) I have heard that private pages do not exist in extended addressing. There is no evidence of this. We use the SMAP% jsys to create several private sections. Once a section is created, pages appear when referred to, as always. (4) There are bugs and holes. The major ones we have found are as follows: - DDT must be fixed, or it thinks that the first 20 locations of every section are in the AC's. (Thus programs that work normally fail when $X'ed.) The address space is in fact continuous, once you get above section 1. You only get the AC's when you are making a local reference, which is typically in code, not data. That is, SKIPE 1 will give you AC 1, even when running in section 3. But if AC 1 contains 10,,1, MOVE 2,(1) will give you address 10,,1, not AC 1. The patch to fix this has been sent to the TOPS-20 distribution list. - There is some strange problem such that ^C followed by running another program hangs the job. If you ^C, do a RESET command, and then do something else, everything is fine, as long as you wait for about a second between the ^C, the RESET, and the next activity. But if you skip the RESET, or do things too fast, your job hangs for about 5 min (and then recovers). 
- We got a J0NRUN while I was doing something in a particularly large core image. I conjecture that this indicates a problem, although it could also be a disk hardware problem. We will have to investigate this further. - The SSAVE jsys doesn't seem to work for extended core images. This may not be a problem, since one probably needs a more compact way of saving these things, more like Interlisp's sysout. It is painful for a quick hack like my existing system, though. (I do have disk I/O, so you could write a DSKOUT easily enough.) These don't seem too outlandish, given that we may be almost the first people to try this sort of programming. My overall conclusion is that the only real penalty for extended addressing is that cons cells take up twice as much space. (I don't think CDR-coding is practical, though someone else may have a way to do it.) Other than this, it looks like the coding comes out a bit better than in the 18-bit implementation, because of the room for a mark bit and type codes. Whether we actually do anything with this depends upon local priorities. By the way, I am not clear why people think that VAX has a larger address space than the 20 (or do people think that?) As far as the architecture, the 20 has 30 bits and the VAX 31 (one bit is used by the system). But clearly 30 bits of 36-bit words are more than 31 bits of 8-bit bytes. Now the obvious reply is that the existing 20 monitor does not support more than about 23 bits at the moment. However I have done the following calculation to see how much one can actually use on the VAX. The problem turns out to be page tables. The VAX has a 512 byte page size. Suppose that one 32-bit word is required in a page table for each page. This means you need almost 10% as much physical space for the page table as your virtual memory (4 bytes per 512 bytes). So with a 3 Mbyte machine you can get only about 30Mbyte of virtual memory before filling up all of physical memory with page tables. 
Of course something is likely to give out before that. But suppose that you can really get 32Mbyte of virtual memory. That is in fact slightly less than the 23 bits that are implemented on the 20 (assuming 2 words per cons cell on the 20 and 8 bytes on the VAX). It seems to me that 512 bytes is a strange page size for a machine intended to have 31 bits of virtual address space. If anyone wants to try my experimental Lisp, it is lying around on line at the moment (though I don't know how long we will keep it). At Rutgers, use the following files:
EXLISP.EXE
INIT.EXLISP - should be on your area; read at startup to define a few useful functions. If you don't have a file by this name, Exlisp will complain at startup, though it will still work.
EXLISP.MID - source (compile it with Midas - it produces an .EXE file directly)
EXLISP.DOC - documentation
You can retrieve files from here over FTP by logging in as ANONYMOUS. Any password will do. (Actually, at the moment FTP seems not to require any login to do retrieves.) PS: Please don't confuse this EXtended LISP with the new Rutgers/UCI Lisp, currently known as Xlisp. It is based upon the normal 18-bit design.  Date: Sunday, 14 December 1980 10:25-EST From: Chris Ryland Reply-To: CPR at MIT-MC To: HEDRICK at RUTGERS cc: Admin.Bosack at SU-SCORE, Rindfleisch at SUMEX-AIM, action at SRI-KL, admin.mrc at SU-SCORE, amarel at RUTGERS, frank at UTAH-20, geoff at SRI-KL, guy.steele at CMU-10A, jonl at MIT-AI, josh at RUTGERS, levy at RUTGERS, lisp-forum at MIT-MC, meyers at SRI-KL, wactlar at CMU-10A Re: extended addressing Chuck: Your experiment is extremely interesting. However, there are still some serious problems with the monitor apropos extended addressing: we have found several JSYSes that crash the system when called from non-zero sections, or do weird things (CFORK, CLOSF, etc). 
I suspect, though I can't produce a lot of evidence, that a large amount of work is in store if we wanted to make TOPS-20 REALLY support extended addressing. (Len, can you comment?)  Date: 15 Dec 1980 0706-EST From: HEDRICK at RUTGERS To: CPR at MIT-MC cc: Admin.Bosack at SU-SCORE, Rindfleisch at SUMEX-AIM, action at SRI-KL, admin.mrc at SU-SCORE, amarel at RUTGERS, frank at UTAH-20, geoff at SRI-KL, guy.steele at CMU-10A, jonl at MIT-AI, josh at RUTGERS, levy at RUTGERS, lisp-forum at MIT-MC, meyers at SRI-KL, wactlar at CMU-10A Re: extended addressing Oops... I had been up too long when I sent that message. Please ignore the comments on VAX virtual address space. There was both a calculational and a conceptual error. Actually I suspect that the real limit to usable virtual address space is physical memory, and that we are far from having enough physical memory on most systems to come near the address space supported. So the real issue would be which operating system does a better job of paging, if indeed there is any difference at all.  Date: 10 January 1981 12:12-EST From: Robert W. Kerns To: CPR at MIT-MC, LISP-FORUM at MIT-MC, action at SRI-KL, geoff at SRI-KL, meyers at SRI-KL, frank at UTAH-20, jonl at MIT-AI, admin.mrc at SU-SCORE, guy.steele at CMU-10A, wactlar at CMU-10A, Rindfleisch at SUMEX-AIM, HEDRICK at RUTGERS, amarel at RUTGERS, josh at RUTGERS, levy at RUTGERS, bboard at RUTGERS Re: extended addressing Sorry to take so long getting around to reading your message. It sort of got buried in the depths of my mail file for a while. Anyway, you seem not to fully understand the VAX architecture in comparing the address space of the VAX with that of the 10. By the way, I am not clear why people think that VAX has a larger address space than the 20 (or do people think that?) As far as the architecture, the 20 has 30 bits and the VAX 31 (one bit is used by the system). But clearly 30 bits of 36-bit words are more than 31 bits of 8-bit bytes. 
Now the obvious reply is that the existing 20 monitor does not support more than about 23 bits at the moment. However I have done the following calculation to see how much one can actually use on the VAX. The problem turns out to be page tables. The VAX has a 512 byte page size. Suppose that one 32-bit word is required in a page table for each page. This means you need almost 10% as much physical space for the page table as your virtual memory (4 bytes per 512 bytes). So with a 3 Mbyte machine you can get only about 30Mbyte of virtual memory before filling up all of physical memory with page tables. Of course something is likely to give out before that. But suppose that you can really get 32Mbyte of virtual memory. That is in fact slightly less than the 23 bits that are implemented on the 20 (assuming 2 words per cons cell on the 20 and 8 bytes on the VAX). It seems to me that 512 bytes is a strange page size for a machine intended to have 31 bits of virtual address space. First, you make a numerical error. One 4-byte PTE for each page is the right number, but that only works out to 4/512 or 1/128, or almost 1%, not 10%. Second, unlike on the -20, VAX page tables are not wired into memory, but can themselves be paged. This may sound strangely recursive, but rest assured that it ends at one level; the page tables for the page tables are wired. Thus physical memory is consumed at one longword (32 bits) for every 128 pages, for a ratio of 2^14 to 1. Now, 32 Mbytes is sort of a random number, and I'm not sure where it comes from. I believe it may be a former or even current limit on what VMS will support. But to compare with the -20: 2^23 36-bit words (8 MW) or 2^30 36-bit words (1024 MW) vs. 2^29 32-bit words, or 2^28 if you don't use P1 space (normally used for the stack, etc., and allocated DOWNWARD). Now, let's consider the real problem. For each user with an address space this big, you need a large disk drive. 
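The corrected arithmetic can be checked directly (a sketch in Maclisp-style arithmetic; not part of the original message):

```lisp
;; One 4-byte PTE per 512-byte page: the overhead if all page
;; tables were wired in physical memory.
(QUOTIENT 4.0 512.0)               ; 1/128, i.e. under 1%, not 10%

;; With pageable page tables, only the tables-for-the-tables are
;; wired: one 4-byte PTE per 128 pages of 512 bytes each.
(QUOTIENT 4.0 (TIMES 128.0 512.0)) ; 1/16384, the 2^14-to-1 ratio
```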
On the -20, for example, a full 30-bit address space would require something like a 5 Gigabyte disk drive (or smaller ones totalling that). It seems clear to me that for the current time, the limit to address space in both the VAX and the -20 is not how many bits the processor uses, but rather how much disk space you can afford for paging. Yes, folks, we're limited by the physical memory size again...  Date: 23 November 1980 16:29-EST From: Kent M. Pitman Re: (more) problems with (STATUS FEATURES) I would like to see a (STATUS DIALECT) introduced (or some similar function; if people don't like STATUS, then (LISP-DIALECT) would do, I guess) so that we could do
(CASEQ (STATUS DIALECT)
  ((MACLISP) ...)
  ((LISPM) ...)
  ((NIL) ...)
  ((INTERLISP) ...)
  ((FRANZ) ...)
  ((IBMLISP) ...)
  ((LISP1/.5) ...)
  (T ...))
instead of
(COND ((STATUS FEATURE MACLISP) ...)
      ((STATUS FEATURE LISPM) ...)
      ...)
since some of the FEATURE things are already ambiguous. E.g., (STATUS LISPM) doesn't supply information to the reader about whether the reason for conditionalizing was one of Operating System, Hardware Configuration, or Dialect. I think there should be a way of making further distinction so that people coming along later can infer information about the intent of the code they see, instead of the ambiguous coding style that now results. I.e., this is a problem that shows up when you later want to add more conditionals to a set of conditionals someone else has written. Such a change would have to be put into at least LispM, NIL, and Maclisp to be useful. Does anyone (dis)like this idea? This would help fill out some of the gaps we left when (STATUS FILES) and (STATUS OPSYS) -- or whatever their longer names were -- went in. (STATUS MACHINE) => LispM, PDP10, TOPS-20, etc. might also be a useful (subtly different than (STATUS OPSYS)) option, as well... I don't oppose totally different schemes which people have for handling current and target features in a better way. 
I'm just pointing out that it's getting painful to understand the semantics of all of the things on the features list. They really need to start being organized into named slots, as the (STATUS OPSYS), etc. functionality helps to achieve the spirit of, at least... -kmp  Date: 23 Nov 1980 16:21:28-PST From: CSVAX.jkf at Berkeley To: KMP at MIT-MC cc: lisp-forum at mit-mc Re: (more) problems with (STATUS FEATURES) How do you propose that (STATUS DIALECT) be accessed with the sharp sign macro, if at all? Or are you proposing that (STATUS DIALECT) be used during run time and (STATUS FEATURE SYSTEM) be used at compile time?  Date: 24 November 1980 02:43-EST From: Kent M. Pitman To: CSVAX.jkf at BERKELEY cc: LISP-FORUM at MIT-MC Re: (more) problems with (STATUS FEATURES) Date: 23 Nov 1980 16:20:43-PST From: CSVAX.jkf at Berkeley How do you propose that (STATUS DIALECT) be accessed with ... sharp sign... ----- I don't. We are finding, as I am sure you are aware, that too much #+ and #- can make code unintelligible. I don't propose to add more hooks into the #-mechanism for abbreviating this idea -- at least not until we've played with it in vanilla code for a while. Right now I am just concerned with making the dialect, etc. determinable at all in any straightforward way.  Date: 25 November 1980 12:52-EST From: Robert W. Kerns To: KMP at MIT-MC cc: LISP-FORUM at MIT-MC, "(FILE [LISP;SHARPC DOC])" at MIT-MC Re: Feature double creature Well, I suppose I should at this time (I've been postponing it) propose my FEATURE-SET proposal. First off, a couple of notes about KMP's proposal: 1) I think the need for a canonical way of determining which dialect the code is running under is clear. 2) I don't see any reason for it to be a STATUS mumble rather than a variable. 3) The issue is separate from issues of sharp-sign conditionalization and feature-testing. 
Sharp-sign conditionalization does not want to depend on what features are PRESENT AT READ TIME, but rather WHAT FEATURES ARE PRESENT AT TIME-OF-USE. This last is the motivation for my proposal. Some of you have seen this before. I've long intended to bring this out as an alternate mode for the existing MACLISP SHARPM package, but haven't had the time for integrating my code into the current version. But since it turns out I need it for cross-compilation in NIL, I've just fixed it up. 1) A single LISP environment often has need of reading forms intended for different LISP environments. For example, a compiler may be doing READ for forms which may be EVAL'ed in its environment, or which may be EVAL'ed in some other LISP with possibly different features (probably not the COMPLR feature!) with or without compilation. I.e. the line #-FOO in a source file usually refers to a feature FOO in the LISP that the compiled file will run in. (I'll get into exceptions to this later.) The LISP where it will eventually matter whether the FOO feature exists I will call the TARGET LISP. The target LISP may or may not be the same LISP that is doing the READ that sees the #+FOO. 2) It is not really enough to say that because a feature is not on a list of features somewhere, the feature is not present. Sometimes this is because whoever told the compiler what features exist simply did not know all about the target environment (i.e. I'm compiling this file for use with the RWK-HACKS feature). Sometimes you know that a feature is NOT present, like FRANZ in the MACLISP compiler. The obvious solution is to maintain TWO LISTS, of KNOWN FEATURES and NON-FEATURES. What do you do if a feature is not known? Two options are to signal an error, and to ask the user. I prefer the latter. 3) It is not enough to have simply ONE set of features. 
When doing a LOAD in a COMPLR, for example, one generally intends for that code to be evaluated in the compiler, so the LOCAL LISP is the target, and the LOCAL FEATURES should be brought to bear. When reading for compilation, COMPILATION-TARGET-FEATURES should be what is used (by default) for FEATUREP. So! I define the following goodies: [This stuff is available by loading ((LISP) SHARPCONDITIONALS) either on top of or instead of ((LISP) SHARPM)] defun DEF-FEATURE-SET target features &optional nofeatures (query-mode ':QUERY) Defines a feature-set for a target environment named <target>, with the supplied features and non-features. If <query-mode> is :QUERY, the user will be asked each time. If <query-mode> is :ERROR, an error is signaled. If <query-mode> is T, FEATUREP of things unknown returns T, while if it is (), it returns (). [This query-mode stuff is merely to satisfy any namby-pambies out there who don't like this winning stuff for one reason or another. If nobody dislikes this stuff, maybe it should be flushed as being excess hair.] defvar TARGET-FEATURES default LOCAL-FEATURES The default feature-set for FEATUREP and the sharp-sign macro. This should be bound around calls to READ when the result of the READ will be understood in some other environment, such as compilers. defun FEATUREP feature &optional (feature-set TARGET-FEATURES) (featurep T) Returns T if <feature> is known to be a feature in <feature-set>. <feature-set> is a symbol which has been defined to be a feature-set by DEF-FEATURE-SET. If the feature is not known to be either a feature or not a feature, action is taken according to the query-mode of the feature set. (see DEF-FEATURE-SET) <featurep> is a purely internal flag which if () turns FEATUREP into NOFEATUREP. defun NOFEATUREP feature &optional (feature-set TARGET-FEATURES) Like FEATUREP, except returns T if known NOT to be a feature. FEATUREP will take not only a feature name, but a generalized FEATURE-SPEC. A feature-spec is a feature name, or (OR f1 f2 ... fn), meaning T iff f1, f2, ... OR fn satisfy the FEATUREP test, (NOT f) [meaning the same as (nofeature f)], or (AND f1 f2 ... fn), meaning T iff f1, f2, ... AND fn satisfy the featurep test. In addition to these familiar operators can be used any name of a FEATURE-SET. I.e. (FEATUREP (AND (LOCAL-FEATURES F1) F2)) returns T iff F1 is a feature in LOCAL-FEATURES and F2 is a feature in whatever feature set is the value of TARGET-FEATURES. This may have some limited application in conjunction with #+ and #-, but should not be used wantonly. #+ and #- are followed by a feature-spec and another form. #+ calls FEATUREP, and #- calls NOFEATUREP on the feature-spec, and if () is returned, the next form is gobbled and thrown away. Note that it is gobbled via a recursive READ, so if it contains illegal syntax or #.'s, an error may result. Note that using #- and #+ inside forms conditionalized with #- and #+ can quickly result in unreadable and unmaintainable code. A few functions for manipulating FEATURE-SETs: defun COPY-FEATURE-SET old new This function makes a new feature-set having the same known features, non-features, and query mode as the old. It is probably the right way to create new feature sets for related environments. defun SET-FEATURE feature &optional (feature-set-name TARGET-FEATURES) This function makes <feature> be a feature in the feature-set named by <feature-set-name>. (FEATUREP <feature>) will thereafter return T. defun SET-NOFEATURE feature &optional (feature-set-name TARGET-FEATURES) This function makes <feature> be a non-feature in the feature-set named by <feature-set-name>. (NOFEATUREP <feature>) will thereafter return T. defun SET-FEATURE-UNKNOWN feature &optional (feature-set-name TARGET-FEATURES) This function makes <feature> be unknown in the feature set. Depending on the query-mode, FEATUREP and NOFEATUREP may query, give an error, or assume the default values thereafter. defun SET-FEATURE-SET-QUERY-MODE feature-set-name mode Sets the feature-set query-mode to <mode>. <mode> can be one of :QUERY, :ERROR, T, or (), as described above.  
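[Editorial note: the feature-spec evaluation RWK describes is easy to restate. The following is a minimal sketch in modern Common Lisp, not RWK's SHARPCONDITIONALS code; the names SPEC-FEATUREP and the struct representation are invented for illustration. It keeps the two lists (known features and known non-features) per feature set and walks AND/OR/NOT recursively, falling back to the query mode for unknown names:

```lisp
;; A feature-set is just known features, known non-features, and a query mode.
(defstruct feature-set
  features                 ; list of symbols known to be present
  nofeatures               ; list of symbols known to be absent
  (query-mode :query))     ; what to do about unknowns

(defun spec-featurep (spec fset)
  "T if SPEC (a name, or an (AND ...)/(OR ...)/(NOT ...) form) holds in FSET."
  (cond ((symbolp spec)
         (cond ((member spec (feature-set-features fset)) t)
               ((member spec (feature-set-nofeatures fset)) nil)
               (t (ecase (feature-set-query-mode fset)
                    (:error (error "Unknown feature ~S" spec))
                    (:query (y-or-n-p "Is ~S a feature? " spec))
                    ((t) t)          ; unknowns default to "present"
                    ((nil) nil)))))  ; unknowns default to "absent"
        ((eq (car spec) 'not)
         (not (spec-featurep (cadr spec) fset)))
        ((eq (car spec) 'and)
         (every (lambda (s) (spec-featurep s fset)) (cdr spec)))
        ((eq (car spec) 'or)
         (some (lambda (s) (spec-featurep s fset)) (cdr spec)))))
```

Under this reading, #+ keeps its guarded form exactly when (spec-featurep spec target-feature-set) is true, and #- when it is false -- which matches RWK's description of the two macros.]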
Date: 26 November 1980 15:58-EST From: Kent M. Pitman To: RWK at MIT-MC cc: LISP-FORUM at MIT-MC Re: FEATURE-CASEQ and TARGET-FEATURES Date: 25 November 1980 18:12-EST From: Robert W. Kerns To: NIL-I (FEATURE-CASEQ 'LOCAL-FEATURES ((AND MACLISP (NOT FORMAT)) (PRINT "What a loss")) (FRANZ (ERROR)) ((OR MULTICS LISPM) (PRINT "Big address space")) (T "I'm lost!")) (FEATURE-SELECT-ALL 'LOCAL-FEATURES ....) [The same, but perform ALL clauses for which the FEATUREP test wins]. What think you? ----- Well, calling it xCASEQ is probably a bad idea since the AND/OR idea is not valid in CASEQ. Probably something like FEATURE-COND or FEATURE-TEST would be more flavorful if you wanted to allow AND/OR/NOT to be in the list. How about FEATURE-TEST for finding the first clause that works and FEATURE-TESTS for finding all clauses that work...? In general, I think this concept of TARGET-FEATURES, etc. is really the right way to go. I think we have seen in Macsyma and NIL that the current concept of features is too restrictive and ambiguous to be very reliable.  Date: 12 March 1981 11:31-EST From: Richard J. Fateman To: LISP-FORUM at MIT-MC, MOON at MIT-MC, GLS at MIT-MC Re: fix,fixr If you want to look at the IEEE standards work for guidance on fix/trunc/round, consider that the standard has round toward {zero, minus inf, plus inf, and unbiased} as modes, and the unbiased mode does not leave undefined the rounding from half-way, but specifies round to nearest even in that case. E.g. 1.5 -> 2, 2.5 -> 2. This provides a systematic avoidance of bias. (Round to nearest odd is similar, but not used in the standard)  Date: 13 January 1981 1414-EST (Tuesday) From: Guy.Steele at CMU-10A Re: Desirable macro I have often wanted a macro (call it FROTZ for now) that would do: (FROTZ + (CAR FIZZLEBLORT) TOOTSNOOTER) ==> (SETF (CAR FIZZLEBLORT) (+ (CAR FIZZLEBLORT) TOOTSNOOTER)) (Actually, I would rather write ((FROTZ +) (CAR FIZZLEBLORT) TOOTSNOOTER) but such is life.) 
People will readily see that FROTZ FOO is similar to the "gets-foo" operator of Algol-like languages: A +:= B means A := A + B A *:= B means A := A * B Of course, the combination FROTZ CONS already exists, and is called PUSH. But FROTZ is also useful with + and also with ADJOIN or ADJQ: (DEFUN ADJOIN (X Y) (IF (MEMBER X Y) Y (CONS X Y))) (DEFUN ADJQ (X Y) (IF (MEMQ X Y) Y (CONS X Y))) These are the "add-unique" or "add-to-set" operations. (DEFUN ADJPROP (X V Y) (FROTZ ADJQ (GET X Y) V)) (although there are more efficient ways to implement this particular case). So how about it? And what to call it? Actually, one could compatibly call it PUSH (the three-arg case)! But it would be wrong. I can't think of anything else to call it. --Guy  Date: 13 January 1981 19:23-EST From: Earl A. Killian To: Guy.Steele at CMU-10A cc: LISP-FORUM at MIT-MC Re: Desirable macro Date: 13 January 1981 1414-EST (Tuesday) From: Guy.Steele at CMU-10A (Actually, I would rather write ((FROTZ +) (CAR FIZZLEBLORT) TOOTSNOOTER) but such is life.) FROTZ could be a macro that defines a gensym appropriately and returns it. Now if there were anonymous macros like there are anonymous functions (i.e. LAMBDA), then it'd be even more straightforward.  Date: 13 January 1981 22:35-EST From: Daniel L. Weinreb To: Guy.Steele at CMU-10A cc: lisp-forum at MIT-MC Re: Desirable macro Since it is so unclear what to call it, I suspect people would have a hard time remembering any name we come up with. And PUSH is so easy to write that it seems like you might as well let the user write things like it if he wants. (I've been feeling a bit minimalist lately, generally feeling that we have too many features already that do somewhat random things.)  Date: 13 January 1981 22:43-EST From: Daniel L. Weinreb To: EAK at MIT-MC cc: LISP-FORUM at MIT-MC Re: Desirable macro Making FROTZ a macro of any sort cannot make ((FROTZ ...) ...) evaluate correctly. 
The evaluator recognizes macro invocations as lists whose car is a symbol which is the name-symbol of a macro. Lists whose cars are not symbols are not macro invocations. The cars of random lists being evaluated are not, themselves, evaluated.  Date: 13 January 1981 23:17-EST From: Earl A. Killian To: DLW at MIT-AI cc: LISP-FORUM at MIT-MC Re: Desirable macro Yeah, I was thinking in SCHEME not LISP when I suggested the hack. Sigh. Even in SCHEME whether it would work or not would depend on the exact order the interpreter did things.  Date: 14 January 1981 1107-EST (Wednesday) From: Guy.Steele at CMU-10A To: EAK at MIT-MC cc: lisp-forum at MIT-MC Re: Desirable (?) macro Date: 13 January 1981 19:23-EST From: Earl A. Killian Date: 13 January 1981 1414-EST (Tuesday) From: Guy.Steele at CMU-10A (Actually, I would rather write ((FROTZ +) (CAR FIZZLEBLORT) TOOTSNOOTER) but such is life.) FROTZ could be a macro that defines a gensym appropriately and returns it. Now if there were anonymous macros like there are anonymous functions (i.e. LAMBDA), then it'd be even more Maybe macro-operators, like operators, would be useful things in a language. (Operators, or functionals, take arguments and produce a function. A macro-operator would take arguments and produce a parameterized macro.) Interesting idea...  Date: 15 JAN 1981 0607-EST From: THUNDR at MIT-AI (Susan Lynn Thunder) Sender: RMS at MIT-AI Suppose we say that an explicit macro definition of the form (macro . fn) is allowed as the car of an expression, just like an explicit lambda. Then suppose we also define "lambda-macros" which are symbols used where lambda might be used, but which define a function differently. The list starting with the lambda-macro is passed to the lambda-macro which is supposed to return an expansion which is a valid function. Using these two features together, ((MODIFY CONS) FOO BAR) can turn into ((MACRO LAMBDA (FORM) ... 
definition such as PUSH could be defined with) FOO BAR) which would turn into (SETF FOO (CONS FOO BAR)).  Date: 15 January 1981 17:56-EST From: Kent M. Pitman Re: Case study: Macro forms as operators in Maclisp For better or worse, let me describe what it is that Maclisp currently does with the things in the car of a form, because it may bring to light some potential problems in the ideas being discussed: If the object is a symbol, a functional property is looked for. If the object is some other kind of atom, I think an error is generated. Else if non-atomic, it is checked for one of a few special forms (LAMBDA forms, FUNARGs, and such) ... and finally if it's not one of those, it is normal Lisp EVAL'd to get something that can be re-run through this procedure. After about 2 or 3 iterations it punts if it hasn't gotten something it can use. The interesting thing is the case where the normal Lisp EVAL happens. So if you take the following two definitions: (DEFMACRO FIRST () 'CAR) (DEFUN DEMO () ((FIRST) '(A B))) and you run them in the interpreter and compiler you get two different effects. In the interpreter, Lisp EVAL means if it's a macro you get the double-evaluation syndrome. So ((FIRST) '(A B)) will err out in the interpreter because the interpreter Lisp EVAL's the car getting back CAR for the macro expansion and then re-eval'ing that to NIL and then trying to funcall the NIL on '(A B) and losing. Interestingly, there is a misfeature of MACROEXPAND in Maclisp such that (EVAL '((FIRST) '(A B))) loses but (EVAL (MACROEXPAND '((FIRST) '(A B)))) `wins' returning A. (What the right thing to do here is very subjective, so I use the term wins and loses *very* loosely.) 
Anyway, in the compiler, no EVALs are being done, so ((FIRST) '(A B)) is only macroexpanded in one pass and you end up with (CAR '(A B)), forgetting that the car of the form needed the extra eval, resulting in an accidental `correct' compilation of DEMO so that it will return A instead of getting the NIL UNDEFINED FUNCTION OBJECT type error. I sincerely hope that a whole pile of code relying on these oddities will not be written as a result of this message, but I do hope it will point up that in different contexts, it may be very complicated to keep track of the number of evaluations needed. I suspect what this says is that Maclisp should not allow random forms in the car of an s-expression (requiring funcall if that's what the guy wants) and if it sees random forms, it should assume they are macro forms, macroexpand them WITHOUT evaluating the result, and then try to apply the expansion. This would probably be the easiest scheme to retain consistency. -kmp  Date: 15 January 1981 18:27-EST From: Kent M. Pitman Re: More on macros in Maclisp E.g., consider the following interaction with Maclisp: (DEFMACRO IMMEDIATE-MACRO (&REST X) (LET ((FN (COPYSYMBOL '%MACRO NIL)) (VAR (GENSYM))) (EVAL `(MACRO ,FN (,VAR) (,X ,VAR))) (SET FN FN) ; Just in case of multiple EVAL (sigh) FN)) => IMMEDIATE-MACRO (DEFMACRO FIRST () '(IMMEDIATE-MACRO LAMBDA (X) `(CAR ,(CADR X)))) => FIRST (MACROEXPAND '((FIRST) '(A B))) => (CAR (QUOTE (A B))) ((FIRST) '(A B)) => ;ILLEGAL USE OF MACRO - EVAL ; The (SET FN FN) above ; probably wasn't needed since ; this loses anyway... ;In the compiler: (DEFUN DEMO (X) ((FIRST) X)) => DEMO (CL DEMO) (LAP DEMO SUBR) (ARGS DEMO (() . 1)) (HLRZ 1 0 1) (POPJ P) () => (COMMENT DEMO) If EVAL were cleaned up to allow macros only in the car of the form, then ((FIRST) '(A B)) could be made to match the other instances here and people would be able to get the macro operators they desire.  Date: 15 January 1981 18:38-EST From: George J. 
Carrette To: KMP at MIT-MC, LISP-FORUM at MIT-MC Re: even more on macros in maclisp EVAL doesn't need to be cleaned up to get the correct action here (although it's a good idea). Consider the following: (defmacro compiler-choice-now (x y) `(if (memq compiler-state '(maklap compile)) ,x ,y)) (defmacro foo () (compiler-choice-now 'car ''car)) Then ((FOO) '(A B)) has the correct action in the compiler and the interpreter. [I would think that this would be obvious.] So, GLS, if you want to use operator macros GO FOR IT! -gjc  Date: 15 January 1981 18:54-EST From: Kent M. Pitman To: GJC at MIT-MC cc: LISP-FORUM at MIT-MC Ah, but the problem is that you can't return non-symbols and get the right thing to happen. Things need to be more regularly defined, I think. Having ((...) ...) work as an implicit-funcall in random places is a real crock and the compiler will give you a 'rewriting with FUNCALL' message when it happens, so I doubt people really use it. Making ((...) ...) have a well-defined semantics would be a nice thing to see.  Date: 16 January 1981 0026-EST (Friday) From: Guy.Steele at CMU-10A To: KMP at MIT-MC (Kent M. Pitman) cc: lisp-forum at MIT-MC Re: Case study: Macro forms as operators in Maclisp "I suspect that what this says" is that things can get very confusing if all programs concerned don't use a consistent semantics. This includes not only interpreter and compiler but also MACROEXPAND, etc. Fortunately, NIL seems to have adhered to the decision to make interpreter and compiler compatible whatever else happens. Unfortunately, the Lisp Machine has not.  Date: 18 JAN 1981 0908-EST From: RMS at MIT-AI (Richard M. Stallman) To: GLS at MIT-AI cc: LISP-FORUM at MIT-AI I am at a loss to see why you accused the Lisp machine of not making the interpreter and compiler do the same thing. Perhaps you meant Maclisp? On the Lisp machine, the interpreter and compiler agree that only lists starting with lambda and symbols are meaningful as the first element of a form. 
I believe this is the right behavior, as far as actual computed functions are concerned. It would be useful to be able to use a macro call as the function, but it would be better if these are not the same macros that are understood at the expression level. That is because those macros all expand into expressions, not functions. So it would be better to make this a new, nonconflicting use of a symbol, just as functions, variables and prog tags are nonconflicting. These macros would be recognized at the start of a list which is supposed to be a function. That is, in a list which is the car of an expression, and in a list which is the argument to FUNCTION, and nowhere else. It's best to have no overlap between what the user can write for a function and what he can write for an expression, unless they are completely the same (as they are in SCHEME).  Date: 18 JAN 1981 0914-EST From: RMS at MIT-AI (Richard M. Stallman) Re: Problem with ((FIRST) ...) On the Lisp machine, FIRST is already a macro, one designed to expand into an expression. So if expression macros were recognized in the car of a function, ((FIRST)...) would barf because FIRST has no argument. If you gave it one, as ((FIRST FOO) ...) it would expand into ((CAR FOO) ...) which isn't what you want. This is no problem for just defining the ((FIRST) ...) macro since you could pick some other name for it. But I think it shows that it would be a feature to make function macros completely distinct from the existing expression macros.  Date: 18 January 1981 18:13-EST From: Kent M. Pitman To: RMS at MIT-AI cc: LISP-FORUM at MIT-MC Re: Problem with ((FIRST) ...) Date: 18 JAN 1981 0914-EST From: RMS at MIT-AI (Richard M. Stallman) Re: Problem with ((FIRST) ...) On the Lisp machine, FIRST is already a macro,... ----- If I understand you correctly, you are saying that you think there should be something akin to (DEF-FMACRO ...) and some new kind of cell distinct from value or function? 
This would, I suppose, allow you to have an expression like ((F F) (F) F) where the first F is an F-MACRO, the second is implicitly quoted until the first one figures out what to do with it, the third is a function or normal MACRO, and the fourth is a variable? And just think about what (COND (((F F) (F) F) ((F F) (F) F))) or (LET ((F ((F F) (F) F))) ((F F) (F) F)) will do for readability ... This isn't a criticism necessarily. I probably even agree with you -- but it is an extra level of complexity in being able to read programs, and I'm just thinking aloud about it. -kmp  Date: 18 JAN 1981 2104-EST From: RMS at MIT-AI (Richard M. Stallman) Re: Multiple use of symbols I agree that it might not be wise to use the same symbol for both a lambda-macro and an expression macro (or ordinary function, for that matter), but I think there is an advantage in being prevented from using something intended as an expression macro in place of lambda. Perhaps lambda-macro should be an alternative to function definitions (including expression macro definitions) so that anything which is a lambda macro can't be a function.  Date: 19 January 1981 02:10-EST From: Kent M. Pitman To: RMS at MIT-MC cc: LISP-FORUM at MIT-MC Re: lambda-macros that sounds like a good idea. i'm not sure if i agree with the name. i'm not sure if that matters. maybe just fmacro would be more consistent with lispm names like fsymeval, fboundp, etc. anyway, seems like the right general idea. presumably maclisp, if it went along, could allow (would have a hard time suppressing) an fmacro property at the same time as a vanilla macro property, etc. But at least the lispm could do it right. 
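[Editorial note: the ((MODIFY op) ...) idea in this thread needs lambda-macros, but the underlying FROTZ behavior GLS asked for can be had as an ordinary expression macro. A hedged sketch in modern Common Lisp -- the name FROTZ and this particular expansion are illustrative, not anyone's actual implementation:

```lisp
;; FROTZ op place args...  expands into  (SETF place (op place args...)).
;; Caveat: PLACE's subforms are evaluated twice in this naive version; a
;; production macro would use GET-SETF-EXPANSION (or DEFINE-MODIFY-MACRO)
;; to evaluate them exactly once.
(defmacro frotz (op place &rest args)
  `(setf ,place (,op ,place ,@args)))

;; GLS's example:
;;   (frotz + (car fizzleblort) tootsnooter)
;; expands to
;;   (setf (car fizzleblort) (+ (car fizzleblort) tootsnooter))
```

This gives the "gets-foo" behavior without touching the evaluator; what it cannot give is the ((FROTZ +) ...) operator syntax, which is exactly why the discussion turns to lambda-macros/fmacros.]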
Date: 22 October 1980 1355-EDT (Wednesday) From: Guy.Steele at CMU-10A To: GSB at MIT-ML cc: lisp-forum at MIT-MC Re: Format's C In answer to your query about FORMAT's C operation: I suggest that the following informal definitions hold: ~C Prints the argument as a character in the most concise format that will be unambiguous to a human reader and consistent with the conventions of his programming environment. Example: if alpha and beta are not available, one might print the character with ASCII code 30 octal as "C-X" or perhaps as "CTRL/X", depending on the conventions of the programming environment. "C-X" might be preferred on grounds of conciseness. "^X" would also be acceptable. ~:C Prints the argument as a lengthy but English-readable (and therefore pronounceable) name. Any "keywords" or "special names" should be capitalized full English words (to get all lowercase or all uppercase one can use STRING-DOWNCASE or STRING-UPCASE or whatever). The separator of choice is "-". [Footnote: I nearly tore into the LISP Machine's FORMAT to add uparrow and downarrow as modifier characters. The idea was that uparrow would mean upper-case output, downarrow lower-case, and both together would mean capitalized. I decided not to, because the arrows are not standard ASCII characters, and also to avoid arrest on the grounds of unwarranted baroqueness.] Examples: Control-X, Meta-Space, Tab ~:@C Does the same thing, but adds a parenthetical remark describing how to type the character if in the judgement of the implementor it may not be obvious to the user how to input that character from the keyboard he is using. Examples: Control- (Top-A) [MIT keyboard] Control-_ (Shift--) [stupid kbds where - and _ look alike] ~@C Prints the argument as a character in such a way that READ can reconstruct the argument, or its logical equivalent, from the output. Subject to this requirement, the output should be readable and concise to whatever extent possible. 
Examples: #/X #\RETURN #\RETURN #,(CONTROL #\RETURN) Do these general descriptions seem reasonable? --Guy  Date: 22 October 1980 22:25-EDT From: Alan Bawden To: GSB at MIT-ML, Guy.Steele at CMU-10A cc: LISP-FORUM at MIT-MC Re: Format's C I was under the impression that ~C was supposed to just TYO the character argument. None of your suggested ~C's do this and it is a useful operation to have. Date: 21 April 1981 19:15-EST From: Glenn S. Burke Re: maclisp format query does anyone use any of the functions format-fresh-line, format-tab-to, or format-formfeed? I'm also curious about how prevalent use of ?FORMAT is. Please reply only to me  Date: 11 November 1980 23:10-EST From: George J. Carrette To: KMP at MIT-MC cc: LISP-FORUM at MIT-MC Kent, the real GJC quote on macsyma tree walking actually goes something along the lines of the same semantic tree walker doing the various kinds of evaluations found in macsyma: EXPAND, TRIGEXPAND, DEMOIVRE, SUBSTITUTE, EQUAL, EVAL, TRANSLATE, COMPILE. (An embarrassingly obvious quote, I'm afraid.) On another note, Q: If you had a choice between: [1] writing a good macsyma->lisp translator. [2] Documenting and packaging the lisp system which macsyma runs in so that J-random user will be able to use it. What would you do? Note: To make it easy to type in various formulae some kind of parsing macro, e.g. (FORTRAN " X:=3*X+Z[J] " " Q[J]:=X*3^J ") may be made available. I must attribute the name for the macro to SOLEY; it means FORmula TRANslator. What's the general feeling on that kind of feature? Could we even have an APL macro for us APL fans? -gjc  Date: 12 November 1980 13:48-EST From: Richard Mark Soley To: GJC at MIT-MC cc: KMP at MIT-MC, LISP-FORUM at MIT-MC Actually, I must attribute the name FORTRAN to FORTRAN, which has always stood for Formula Translator, even before I wrote the FORTRAN macro. 
-- richard  Date: 14 January 1981 1511-EST (Wednesday) From: Guy.Steele at CMU-10A Re: default value for GET and friends Whatever happened to the idea of letting GET, and similar functions such as GETHASH, take an optional extra argument which specifies what to return if no property is found? Date: 21 October 1980 12:22-EDT From: George J. Carrette To: BUG-LISP at MIT-MC, LISP-FORUM at MIT-MC, NIL at MIT-MC You know what I could use many times? LABELS, ala SCHEME, however, what would suit me most of those times would be a simple extension to PROG, a GOSUB ala BASIC. (prog (common-variable-environment) (COND possibly complicated with (GOSUB FROB-1) and (GOSUB FROB-2)) FROB-1 ...stuff... (RETURN NIL) FROB-2 ...stuff... (RETURN NIL)) The value of the GOSUB would be the value of the RETURN next executed. Question: How easy is it to add a GOSUB feature to existing lisp compilers? Note: I have various pattern matchers and finite state machines which would love to be able to macroexpand into this type of thing. There is no other way to get this functionality in present lisps without the use of special variables or variables in structures, both of which forego the efficiency of register or number-pdl allocation. (Computed GO's are out of the question of course). Question: Is this a reasonable feature to have? -gjc  Date: 21 October 1980 13:46-EDT From: Richard L. Bryan To: GJC at MIT-MC, LISP-FORUM at MIT-MC Date: 21 OCT 1980 1222-EDT From: GJC at MIT-MC (George J. Carrette) You know what I could use many times? LABELS, ala SCHEME, however, what would suit me most of those times would be a simple extension to PROG, a GOSUB ala BASIC. . . . Question: Is this a reasonable feature to have? I find it hard to believe you're serious. There are so many things wrong with it that I can hardly begin to list them all. So I won't. LABELS would be easier to implement given the internals of the compilers I've dealt with. 
A non-correctly-closing version of LABELS whose functions merely shared the lexical environment (hmm, sounds like a LET binding &FUNCTION variables) isn't very hard to implement. But don't hold your breath waiting for one in Maclisp.  Date: 21 October 1980 1426-EDT (Tuesday) From: Guy.Steele at CMU-10A Re: LABELS, GOSUB (yech), etc. RLB is right -- an "FLET" just like LET except for binding functional variables (or, alternatively, using Stallman's "procedural destructuring" technique, one could write (LET ((#'TEMPFN (LAMBDA ...))) ...) instead of (FLET ((TEMPFN (LAMBDA ...))) ...) ) would be handy on occasion, particularly if the compiler could integrate such local procedures when appropriate. Conversely, a "non-function" version of LABELS can be a useful thing too. See the paper by F. Lockwood Morris in the 1980 LISP Conference Proceedings, which is about exactly that.  Date: 21 October 1980 16:48-EDT From: Daniel L. Weinreb Sender: dlw at CADR2 at MIT-AI To: GJC at MIT-MC, lisp-forum at MIT-MC I, too, find it hard to believe you are serious about GOSUBs. We should definitely have internal procedures. By the way, the whole idea of LISP-FORUM is so that you don't have to CC the letter to BUG-LISP and NIL as well. Just LISP-FORUM will suffice.  Date: 21 October 1980 17:16-EDT From: George J. Carrette To: DLW at MIT-MC cc: LISP-FORUM at MIT-MC Ok, when do we get internal procedures? You can look at GOSUB as a special case of internal procedures if you like; there is nothing wrong with GOSUB, it's GO that is problematic. (Maybe you don't like the thought of GO mixed with GOSUB?) If there is some dread problem in implementing GOSUB that I fail to see then please point it out. If the problem is only programming style, that people might use GOSUB, well gee, I only want my macros to be able to expand into them.  Date: 22 October 1980 01:33-EDT From: Daniel L. 
Weinreb Sender: dlw at CADR6 at MIT-AI To: GJC at MIT-MC cc: LISP-FORUM at MIT-MC In the old days on the Lisp Machine, we implemented a special form called SKIPE. It had one subform, which it evaluated, and if the result was false the next item in the PROG containing the SKIPE was skipped. There was also SKIPN, which skipped if non-NIL. If we put in GOSUB, we should certainly have these as well.  Date: 22 October 1980 01:47-EDT From: Robert W. Kerns To: GJC at MIT-MC cc: DLW at MIT-MC, LISP-FORUM at MIT-MC Re: GOSUX Why not invent a BASIC special form? Much better are local functions. ((LAMBDA (&FUNCTION FOO) (LIST (FOO 1) (FOO 2))) #'(LAMBDA (A) (* A A)))  Date: 22 October 1980 1132-EDT (Wednesday) From: Guy.Steele at CMU-10A To: eak at MIT-MC, lisp-forum at MIT-MC cc: gjs at MIT-AI Re: DEFINEM Date: 21 October 1980 19:46-EDT From: Earl A. Killian Subject: LABELS, GOSUB (yech), etc. Is there any problem in defining the following in SCHEME? (DEFINEM external-list (F1 (LAMBDA ...)) (F2 (LAMBDA ...)) ...) where F1, F2, ... are defined and closed together a la LABELS, and those functions that are also on external-list are defined (like DEFINE) in the environment containing the DEFINEM. Example: (DEFINEM (NREVERSE) (NREVERSE (LAMBDA (X) (COND ((NULL X) '()) (T (NREVERSE1 X '()))))) (NREVERSE1 (LAMBDA (X Y) (COND ((NULL (CDR X)) (RPLACD X Y)) (T (NREVERSE1 (CDR X) (RPLACD X Y))))))) The DEFINEM acts as a sort of module definition, which only exports certain functions. Sometimes this would be clearer than using a LABELS, especially when you have several co-equal-recursive functions, only a few of which want to be externally visible. Also, this should give you the efficiency of block-compiling in Interlisp. My first idea about defining this with LABELS was to do something like (DEFMACRO DEFINEM (ELIST . BODY) `(MAPCAR ASET ',ELIST (LABELS ,BODY (LIST . ,ELIST)))) but it's not clear this wouldn't choke the compiler. 
Note that while this is an upward funarg, the environment is constant, and so should compile out. That's about the best I can think of. Unfortunately, all the LAMBDA-games to avoid variable clashes without GENSYMs break down somewhat in the face of ASET', for some reason. I tried to write a version that avoided calling LIST and couldn't. SCHEME is certainly not the answer to all the world's problems. You might be amused by this solution, however, which makes use of an extension inspired by a paper by Klaus Berkling on something he calls "lambda-bar" calculus. Let "-LAMBDA" be an environmental operator which has the same syntax as LAMBDA, but has the effect of *cancelling* one outer LAMBDA contour for those variables. (Therefore it is illegal to mention a variable in a -LAMBDA expression unless it is bound in some uncancelled surrounding LAMBDA expression.) It does not evaluate to a function; it just evaluates its body. This is not too hard to add to the SCHEME interpreter--you just add the variable "bindings" to the environment with a "cancel" marker (like an "unbound" marker) as the value, and when you encounter such a marker on lookup, you just keep looking up, balancing cancels and real values like parentheses. The compiler has no problem--it is all resolvable lexically. It is purely a scoping operator, with no run-time overhead when compiled. So: (DEFMACRO DEFINEM (ELIST . BODY) (LET ((TEMPS (DOLIST (IGNORE ELIST) (GENSYM-NOT-IN ELIST)))) `(LABELS ,BODY ((LAMBDA ,TEMPS (-LAMBDA ,ELIST ,@(DOLISTS ((E ELIST) (X TEMPS)) `(ASET' ,E ,X)))) ,ELIST)))) This uses GENSYM-NOT-IN, which generates a symbol not among those in its argument list. This makes me a little more comfortable than plain old GENSYM, since it *guarantees* the absence of variable name conflicts. Or, if you assume that (ASET (-LAMBDA (X) 'X) Y) works (actually, it couldn't, but some appropriate syntax could be found), you can simplify this to: (DEFMACRO DEFINEM (ELIST . 
BODY) `(LABELS ,BODY ,@(DOLIST (E ELIST) `(ASET (-LAMBDA (,E) ',E) ,E)))) Also, it seems like it might be a win for DEFINE to define the function in an environment with the function name bound to the function. Then a recursive call to the function could jump/call to the subroutine itself, rather than indirecting through the function name's value cell. Actually, I believe that the SCHEME-78 chip actually did leave the function in the environment, so that this sort of trick could be pulled--see the AI Memo on that subject. One problem, of course, is that it defeats the TRACE package, which is the usual problem with "block compilers".  Date: 22 October 1980 1136-EDT (Wednesday) From: Guy.Steele at CMU-10A To: Daniel L. Weinreb cc: lisp-forum at MIT-MC Re: SKIPE and SKIPN I thought I invented these kludges, in fact, and I think they were put temporarily into MacLISP as well, but I can't remember. It was my dream to implement the whole PDP-10 so that we could write LISP PROGs that were more readable. In line with this I suggest that GOSUB be renamed PUSHJ.  Date: 22 October 1980 15:58-EDT From: Daniel L. Weinreb Sender: dlw at CADR6 at MIT-AI To: Guy.Steele at CMU-10A cc: lisp-forum at MIT-MC Re: SKIPE and SKIPN That's right; you first put them into Maclisp, and then someone, probably one of us, put them into the Lisp Machine. As long as we are going to put in PUSHJ, I think JSP would be good, and just think what JSR could do...  Date: 22 October 1980 17:44-EDT From: Robert W. Kerns To: DLW at MIT-MC, LISP-FORUM at MIT-MC, Guy.Steele at CMU-10A Re: JSA/SRA Let's not forget these total winners, either!  Date: 23 October 1980 1223-EDT (Thursday) From: Guy.Steele at CMU-10A To: Daniel L. Weinreb cc: lisp-forum at MIT-MC Re: SKIPE and SKIPN... and JSR Let *HERE* stand for a virtual tag occurring just after the current PROG statement. 
Then I would propose that (JSR FOO) mean (PROGN (SETQ FOO *HERE*) (GO FOO)), that is, the tag FOO which is the target receives the return address as a value, and then you continue execution just after the tag.  Date: 23 October 1980 13:46-EDT From: Daniel L. Weinreb Sender: dlw at CADR17 at MIT-AI To: Guy.Steele at CMU-10A cc: lisp-forum at MIT-MC Re: PROG tags That sounds like a good way to implement JSR. How about JRA; it is useful inside the source of Maclisp, so we should certainly provide it to the programmer as well. Any ideas? While we are changing things, perhaps we should follow the lead of Pascal, and only allow numbers to be used as PROG tags.  Date: 23 October 1980 14:02-EDT From: George J. Carrette To: DLW at MIT-MC cc: LISP-FORUM at MIT-MC, Guy.Steele at CMU-10A Pascal PROG tags are NOT numbers, which shows how much you know about such things. They are symbols where the only allowed characters are in the set {0,1,2,3,4,5,6,7,8,9} and leading {0} are ignored.  Date: 23 October 1980 15:49-EDT From: Daniel L. Weinreb Sender: dlw at CADR17 at MIT-AI To: GJC at MIT-MC cc: LISP-FORUM at MIT-MC Well, we mustn't be incompatible. Let's change the Lisp reader so that when it is reading at the top level of the inside of a PROG form, it recognizes 345 as a symbol. Of course, 345. and 345_2 should still be fixnums, since Pascal doesn't allow these?  Date: 23 Oct 1980 1608-EDT From: JoSH Re: forcibly stuffing features into a form, ie, PROG ramming as long as you are having pseudo-numeric labels, you may as well put in the FORTRAN-style DO statement and numeric IF...  Date: 23 October 1980 19:33-EDT From: J. Noel Chiappa Re: Line numbers Clearly, if we are going to start putting numbers on some lines, we should do it to all of them, for convenience of accessing specific parts of the code with advanced editors. 
While we're at it, we could remove a lot of the hassles involved with the long-super-hairy-variable-names-that-you-can-never-remember-quite-right and implement only single character variable names. Comments? Noel  Date: 23 October 1980 20:18-EDT From: George J. Carrette To: JNC at MIT-MC cc: LISP-FORUM at MIT-MC Re: Line numbers Date: 23 OCT 1980 1933-EDT From: JNC at MIT-MC (J. Noel Chiappa) To: LISP-FORUM Re: Line numbers Clearly, if we are going to start putting numbers on some lines, we should do it to all of them, for convenience of accessing specific parts of the code with advanced editors. While we're at it, we could remove a lot of the hassles involved with the long-super-hairy-variable-names-that-you-can-never-remember-quite-right and implement only single character variable names. Comments? Noel Yeah, I have some comments, you are being a real ass hole. Somebody brings up an issue dealing with the efficient implementation of a compiled finite state machine in Lisp and the jokers come out of the woods, demonstrating their total lack of understanding of the programming problems being addressed. DLW and GLS were on track when they joked about PDP10-like machine instructions, "it sure would be nice to be able to generate PUSHJ/POPJ in maclisp, other than at function-call boundaries", gosh, that was the whole point. They were not thrown off by my mention of BASIC. Anyway, the basic need expressed is no joke. -gjc  Date: 23 October 1980 21:02-EDT From: Daniel L. Weinreb Sender: dlw at CADR6 at MIT-AI Re: Line numbers and such Yes, there is a limit to how long this is funny (although the story about implementing SKIPE and SKIPN was not made up). There is a serious need for internal functions. I just feel that GOSUB would be a cure worse than the disease. Internal functions MUST take arguments and MUST be called the same way regular functions are; it MUST be possible to pass them as arguments and return them from functions.
Second-class objects are a loss, and GOSUB does not give you objects at all. Can't you implement your own GOSUBs, using a computed GO to return, or something? NCOMPLR would not generate PUSHJs, presumably, but it might do what you want...  Date: 23 October 1980 21:16-EDT From: George J. Carrette To: DLW at MIT-MC cc: LISP-FORUM at MIT-MC I have to admit I would normally use HUNKS and SUBRCALL to make myself first-class function objects, even if I am only going to use them statically. However, *sigh*, I was doing something to live inside a macsyma environment, (ah, so my true losing stripes are showing), and macsyma can't afford hunk pages just for a couple of my little functions. The non-upward usage of local functions would be a win there. (So, I'm resorting to DEF-FOO's which expand into LAP-A-LIST. No kidding).  Date: 24 October 1980 1800-EDT (Friday) From: Guy.Steele at CMU-10A To: Daniel L. Weinreb cc: gjc at MIT-MC, lisp-forum at MIT-MC Re: Incompatibility Easier than changing READ to parse numbers as symbols in PROG forms, would be simply to make READ *always* treat numbers as symbols. Then we change the arithmetic functions to accept symbols by letting them parse the pnames at run time.  Date: 24 October 1980 1807-EDT (Friday) From: Guy.Steele at CMU-10A To: JNC at MIT-MC (J. Noel Chiappa) cc: lisp-forum at MIT-MC Re: Line numbers Right on! But single-letter names are few in number. How about optionally appending a digit? So here is factorial: (DEFUN FACT (N) (PROG (A) 10 (SETQ A 1) 20 (IF (= N 0) (RETURN A)) 30 (SETQ A (* N A)) 40 (SETQ N (- N 1)) 50 (GO 20))) [You think that's funny, hah?! Well, it really is a legal MacLISP program, and it will work!] If you prefer recursion to iteration, then how about: (DEFUN FACT (N) (PROG (A) 10 (SETQ A 1) 20 (IF (= N 0) (RETURN A)) 30 (SETQ N (- N 1)) 40 (GOSUB 20) 50 (SETQ N (+ N 1)) 60 (SETQ A (* N A)) 70 (RETURN A))) [Not legal MacLISP... yet!]  
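[DLW's computed-GO suggestion can be made concrete against GLS's hypothetical GOSUB factorial. The sketch below assumes MacLisp's computed GO (a GO whose non-atomic argument is evaluated to yield the target tag); the variable RET and the tags R1 and POP are invented for the sketch, not part of any proposal in the thread:

```lisp
(DEFUN FACT (N)
  (PROG (A RET)
        (SETQ A 1)
     20 (COND ((= N 0) (GO POP)))          ; "subroutine" entry point
        (SETQ N (- N 1))
        (SETQ RET (CONS 'R1 RET))          ; remember where to return
        (GO 20)                            ; i.e. (GOSUB 20)
     R1 (SETQ N (+ N 1))
        (SETQ A (* N A))
    POP (COND ((NULL RET) (RETURN A)))     ; no pending returns: done
        (GO (PROG1 (CAR RET)               ; computed GO pops the
                   (SETQ RET (CDR RET))))))  ; newest return tag
```

This has exactly the second-class flavor DLW objects to: the "subroutine" at tag 20 can only be jumped to, never passed as an argument or returned.]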
Date: 24 October 1980 1812-EDT (Friday) From: Guy.Steele at CMU-10A To: George J. Carrette cc: lisp-forum at MIT-MC Re: Line numbers and finite-state machines in LISP Actually, I have always found it most convenient to express finite-state machines in LISP by using mutually recursive functions, passing the state around as an argument. This is the main programming-style reason for wanting a tail-recursive implementation. If LABELS is available to package up these functions, so much the better. If the state is too unwieldy to pass as arguments, then PROG is fine, more or less. As for GOSUB, functional variables seem to be an easy way to express that idea. Smart compilers (ha!) can figure them out and use JSP instead of PUSHJ even, though it is indeed a lot of work. But if we lobby for the language feature and it gets used a lot, then eventually the compiler *will* become smart enough... if we want it to be that way (a subject for debate). Date: 5 December 1980 20:53-EST From: George J. Carrette Re: food for hack. (setsyntax #\SP 'splicing #'(lambda () ())) Other than tty prescan lossage, doesn't change the syntax of space.  Date: 5 December 1980 21:34-EST From: Richard Mark Soley To: GJC at MIT-MC cc: LISP-FORUM at MIT-MC Re: food for hack. Date: 5 DEC 1980 2053-EST From: GJC (George J. Carrette) Re: food for hack. (setsyntax #\SP 'splicing #'(lambda () ())) Other than tty prescan lossage, doesn't change the syntax of space. Why in HELL do you want to? Date: 19 February 1981 18:22-EST From: Earl A. Killian Re: [DWS: LISP info requested] Date: 19 Feb 1981 (Thursday) 0935-PST From: DWS at LLL-MFE Can either of you point me towards references detailing the trickery necessary to implement lisp? Somebody mentioned something a while back about a book called something like "Anatomy of a LISP", but I haven't been able to track it down. I'm interested in getting a lisp running on a Z80 CP/M system. Thanks. 
-- Dave Smith  Date: 19 Feb 1981 2037-EST From: BRoberts at BBNG (Bruce Roberts) To: dws at LLL-MFE cc: lisp-forum at MIT-MC Re: Z-80 Lisp The book you were looking for is: John Allen "Anatomy of LISP" McGraw-Hill 1978. In addition to his other interests and teaching responsibilities at Stanford, John Allen (JRA@SU-AI) has a company called The LISP Company -- (T . (L . C)) and is very interested in promoting Lisp on small machines. In fact, he has written TLC-Lisp, which runs on a Z-80 under CP/M. This is not a toy implementation. He explicitly acknowledges Maclisp as the ancestor of TLC-Lisp and this is apparent in the choice of functions provided. One finds the generalized DO statement, &keyword arguments to DEFUN, macros, readmacros, catch/throw, LET, and many familiar function names. I would say the most obvious omission to date is any kind of vector/array/hunk data structure; but it does have string and character datatypes. No compiler is advertised, but one exists that handles about 60% of the language and produces a factor of 4 speed up. TLC-Lisp has two schemes for overcoming the address space limitation of the Z-80: (1) an "autoload" that brings in lisp objects from a file (either permanently or each time referenced), (2) at least on Cromemco systems, using bank-switching to extend the usable address size in which to store lisp objects to 4*32K. An object takes 4 bytes to store, so one can have 32,000 of them. We have recently acquired a Cromemco Z-2D system and have just begun to develop some software for it (in C and TLC); our initial impressions are very favorable. Support John Allen's efforts if possible; his company is lurching along at this point, and deserves to flourish. The address is: The Lisp Company P.O. Box 487 Redwood Estates, CA 95044 (408) 353-3857  Date: 19 February 1981 22:00-EST From: George J. Carrette To: EAK at MIT-MC cc: LISP-FORUM at MIT-MC Re: [DWS: LISP info requested] Date: 19 February 1981 18:22-EST From: Earl A. 
Killian To: LISP-FORUM Re: [DWS: LISP info requested] Date: 19 Feb 1981 (Thursday) 0935-PST From: DWS at LLL-MFE Can either of you point me towards references detailing the trickery necessary to implement lisp? Somebody mentioned something a while back about a book called something like "Anatomy of a LISP", but I haven't been able to track it down. I'm interested in getting a lisp running on a Z80 CP/M system. Thanks. -- Dave Smith AI Memo 420, Data Representation in PDP-10 Maclisp by Guy L Steele Jr. A number of other AI memos by GLS and GJS on lisp implementations are also interesting. All these memos have the advantage of being very compact.  Date: 19 Feb 1981 2204-EST From: JWALKER at BBNA To: EAK at MIT-MC cc: LISP-FORUM at MIT-MC Re: [DWS: LISP info requested] The book is Anatomy of Lisp by John Allen at Stanford published ca 1978 by McGraw-Hill. Date: 24 October 1980 03:24-EDT From: Barry Margolin Re: Improving LISP "grossly" We could really make LISP a lot faster and more efficient if we didn't allow arguments to be passed to subroutines. Think about all those computrons being used setting up the new bindings of the lambda variables. Global variables are truly the only winning way to pass values to and from a subroutine. -Barmar  Date: 24 October 1980 03:29-EDT From: Alan Bawden To: BARMAR at MIT-MC cc: LISP-FORUM at MIT-MC Re: ENOUGH! Can we please give up this stupid joke? I'm tired of the pollution on this mailing-list. We talk about enough real bad ideas without inventing fake ones.  Date: 24 October 1980 13:03-EDT From: George J. Carrette To: BARMAR at MIT-MC cc: LISP-FORUM at MIT-MC Re: Improving LISP "grossly" I assume you were sort of joking with the suggestion. But it sounds like you haven't heard of compilation. The arguments to a function in compiled lisp code are simply pushed onto a stack as they are computed. Far more efficient than the LAMBDA-BINDING (actually SPEC-BINDING) you mention, and also more efficient than using global variables. 
-gjc Date: 12 DEC 1980 1453-EST From: BAK at MIT-AI (William A. Kornfeld) Re: Point of incompatibility between MacLisp and Lispm lisp The form: ((LAMBDA (A B A) A) 1 2 3) evaluates to 1 in MacLisp and to 3 in Lispm lisp. Date: 1 February 1981 07:53-EST From: Robert W. Kerns To: RMS at MIT-AI cc: LISP-FORUM at MIT-AI Re: DEFMACRO incompatible Date: 28 JAN 1981 0913-EST From: RMS at MIT-AI (Richard M. Stallman) On the Lisp machine, a list as the first argument to DEFMACRO is a function spec saying where to put the macro. In Maclisp, a list as the first argument to DEFMACRO specifies hairy options for how the macro should work. Might I humbly suggest that the MacLisp form is more flexible, and documented, whereas the LISPM one is not? I have no idea as to the relative difficulty of conversion. The LISPM option could be subsumed by a :WHERE option in the MacLisp scheme.  Date: 3 FEB 1981 2349-EST From: Moon at MIT-AI (David A. Moon) To: RWK at MIT-MC cc: RMS at MIT-AI, LISP-FORUM at MIT-AI Re: DEFMACRO incompatible Date: 1 February 1981 07:53-EST From: Robert W. Kerns Date: 28 JAN 1981 0913-EST From: RMS at MIT-AI (Richard M. Stallman) On the Lisp machine, a list as the first argument to DEFMACRO is a function spec saying where to put the macro. In Maclisp, a list as the first argument to DEFMACRO specifies hairy options for how the macro should work. Might I humbly suggest that the MacLisp form is more flexible, and documented, whereas the LISPM one is not? I have no idea as to the relative difficulty of conversion. The LISPM option could be subsumed by a :WHERE option in the MacLisp scheme. It would be unreasonable to change the Lisp machine one, since that would make DEFMACRO incompatible with DEFUN and all other function-defining special forms. Why can't DEFMACRO put its declarations in the body where all other function-defining things put them? I'm not sure in what sense the Maclisp one is documented and the Lisp machine one is not. 
I know of no published Maclisp documentation that mentions DEFMACRO. If you are talking about on-line files, of course the Lisp machine one is documented.  Date: 4 February 1981 02:43-EST From: Robert W. Kerns To: Moon at MIT-AI cc: RMS at MIT-AI, LISP-FORUM at MIT-AI Re: DEFMACRO incompatible Date: 3 FEB 1981 2349-EST From: Moon at MIT-AI (David A. Moon) It would be unreasonable to change the Lisp machine one, since that would make DEFMACRO incompatible with DEFUN and all other function-defining special forms. Why can't DEFMACRO put its declarations in the body where all other function-defining things put them? Really? DEFMACRO is compatible with MACRO and FSETQ-CAREFULLY and ....? I really don't understand any of this. The options you can supply to DEFMACRO are like the options you can supply to DEFSTRUCT. Declarations, at least as I think of them, are information for the compiler. Are you suggesting that DEFMACRO, if the first form of the body is a DECLARE, should parse that DECLARE? My initial reaction to this is that this is not something it should be doing. Perhaps you can convince me otherwise. I'm not sure in what sense the Maclisp one is documented and the Lisp machine one is not. I know of no published Maclisp documentation that mentions DEFMACRO. If you are talking about on-line files, of course the Lisp machine one is documented. I was being silly. The relevant difference between the LISPM documentation and the MacLisp documentation for DEFMACRO is that I know where the MacLisp documentation is. Unfortunately, I didn't realize this until after I sent the message. Sorry.  Date: 4 February 1981 08:35-EST From: Jon L White To: RWK at MIT-MC, MOON at MIT-MC, RMS at MIT-MC cc: LISP-FORUM at MIT-MC Re: In search of the compatible DEFMACRO How serious is this "compatibility" argument? Date: 4 February 1981 02:43-EST From: Robert W. Kerns Subject: DEFMACRO incompatible Date: 3 FEB 1981 2349-EST From: Moon at MIT-AI (David A. 
Moon) Subject: DEFMACRO incompatible It would be unreasonable to change the Lisp machine one, since that would make DEFMACRO incompatible with DEFUN and all other function-defining special forms. Why can't DEFMACRO put its declarations in the body where all other function-defining things put them? I really don't understand any of this. The options you can supply to DEFMACRO are like the options you can supply to DEFSTRUCT. Declarations, at least as I think of them, are information for the compiler. In addition to Bob's comment about parallels with DEFSTRUCT, I might ask what DEFUN ever accepts argument syntax like (DEFmumble FOO (X . Y) (something-to-do)) ?? Well DEFMACRO does. The "name" argument, in this example FOO, can be replaced by a list of name and options; at least it can with DEFSTRUCT, DEFVST, and the NIL/MacLISP DEFMACRO (but not the LISPM DEFMACRO). Is there any reason for this gratuitous incompatibility?  Date: 8 February 1981 03:12-EST From: Kent M. Pitman To: "(FILE [LSPMAI;LISP FORUM])" at MIT-MC Re: [RMS: forwarded] Date: 01/31/81 12:19:20 From: RMS at MIT-AI To: KMP at MIT-AI The DEFMACRO incompatibilities can't be fixed by your suggestion. A function specifier looks like (:METHOD class operation) or (:PROPERTY symbol propname) or (:WITHIN within-function function-to-rename) or ... or (symbol propname) for compatibility with Maclisp DEFUN. In other words, the first symbol in the list is recognized and it causes some number (usually 1 or 2) following elements in the list to have some meaning or other. I don't think it's very easy to fit this in with a plist, unfortunately.  Date: 1 February 1981 00:09-EST From: Kent M. Pitman To: RMS at MIT-MC cc: LISP-FORUM at MIT-MC Why don't we introduce LETF which is a LET with the semantics you have suggested. The parallel between SET/LET and SETF/LETF would be quite pleasant.  Date: 1 FEB 1981 0115-EST From: RMS at MIT-AI (Richard M. 
Stallman) Re: Problem with LETF It would be a loss to create new binding constructs to use SETF-style destructuring because there are already a lot of different binding constructs, with different features, and we would have to double their number. We would have LETF and LETF* and PROGF and PROGF* and DOF and DOF*, not to mention maybe LOOPF and FORF and perhaps DEFUNF. If LET were the only binding form in Lisp, then LETF would be fine, but as things are it is important to use the same existing ones.  Date: 1 February 1981 22:46-EST From: Kent M. Pitman Re: LET, LETF, etc. What do others think? On a system which has so many builtins, I don't buy the ``it would be cluttering up the language to ...'' idea with regard to addition of a useful feature. Features should be added on the basis of their need. Maclisp makes extensive use of destructuring LET. I think you should attempt to be compatible. I don't think having a LETF (and maybe LETF*, though I see no use for DOF, DOF*, PROGF, PROGF*) would be bad. It provides a useful functionality at a low cost and would not be incompatible with existing software. I would be interested in hearing the opinions of other LispM users. In particular, who favors and who opposes destructuring LET. If it is opposed for some other reason than because there is a split between people who like RMS's kind of destructuring and the Maclisp style, what is it? If that is the only problem, who besides RMS thinks that something like LETF would be a bad idea and why? Thanks. -kmp  Date: 2 February 1981 02:56-EST From: Daniel L. Weinreb Re: destructuring It would indeed be horrible to have to introduce PROGF, PROGF*, et al. I think that when you want to introduce new functionality, you should have new functions (or special forms), rather than changing lots of things in the existing world to support the new functionality as an "added feature". Destructuring is new functionality; there should be a new special form that does it. 
This is far more modular and clean than having to modify a large number of Lisp special forms to have yet another feature. Modifying all lambda-binding special forms to have a feature that has nothing to do with lambda-binding is what I object to. Date: 15 December 1980 23:59-EST From: George J. Carrette Re: Is LEXPR-FUNCALL a pain? I've noticed that many of my calls to LEXPR-FUNCALL are with the first argument quoted, e.g. (LEXPR-FUNCALL #'FORMAT *MESSAGE-STREAM* STRING LIST) whereas one would never write (hardly ever) (FUNCALL #'FOO X Y) because one writes (FOO X Y). Q: Could it be possible to use the syntax? (<fn> ARG ARG . VECTOR-OR-LIST-ARGS)  Date: 16 December 1980 00:18-EST From: Mike McMahon Sender: MMcM at CADR6 at MIT-AI To: GJC at MIT-MC cc: LISP-FORUM at MIT-MC Re: Is LEXPR-FUNCALL a pain? Date: 15 DEC 1980 2359-EST From: GJC at MIT-MC (George J. Carrette) Q: Could it be possible to use the syntax? (<fn> ARG ARG . VECTOR-OR-LIST-ARGS) Suppose VECTOR-OR-LIST-ARGS is a form to be evaluated?  Date: 16 December 1980 00:26-EST From: George J. Carrette To: MMcM at MIT-AI cc: LISP-FORUM at MIT-MC Re: Is LEXPR-FUNCALL a pain? Date: 16 December 1980 00:18-EST From: Mike McMahon Date: 15 DEC 1980 2359-EST From: GJC at MIT-MC (George J. Carrette) Q: Could it be possible to use the syntax? (<fn> ARG ARG . VECTOR-OR-LIST-ARGS) Suppose VECTOR-OR-LIST-ARGS is a form to be evaluated? Sigh...  Date: 16 December 1980 1219-EST (Tuesday) From: Guy.Steele at CMU-10A To: GJC at MIT-MC (George J. Carrette) cc: lisp-forum at MIT-MC Re: Is LEXPR-FUNCALL a pain? Your proposed syntax is not general. Suppose I wanted to write: (FORMAT X Y . (CAR FOO)) Well, that's the same as (FORMAT X Y CAR FOO), so I would instead have to do (LET ((Z (CAR FOO))) (FORMAT X Y . Z)). In general, we ought to avoid syntaxes that take an evaluated thing but don't allow all valid expressions. We learned that lesson with (STATUS SYNTAX ...). 
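[GLS's objection can be sidestepped while keeping most of GJC's brevity: leave the trailing-list argument in an ordinary evaluated position and expand into LEXPR-FUNCALL. A macro sketch; the name CALL* is invented here, not proposed anywhere in the thread:

```lisp
;; (CALL* fn arg ... list-form) quotes the function name for you and
;; expands into LEXPR-FUNCALL, whose last argument is spread as the
;; remaining arguments.
(DEFMACRO CALL* (FN . ARGS)
  `(LEXPR-FUNCALL #',FN ,@ARGS))

;; (CALL* FORMAT *MESSAGE-STREAM* STRING LIST)
;;   expands to (LEXPR-FUNCALL #'FORMAT *MESSAGE-STREAM* STRING LIST)
```

Because the final position is an ordinary form, GLS's troublesome (FORMAT X Y . (CAR FOO)) case is simply (CALL* FORMAT X Y (CAR FOO)), with no dotted-tail ambiguity.]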
Date: 11 November 1980 13:39-EST From: Jon L White To: deutsch at PARC-MAXC cc: LISP-FORUM at MIT-MC, LISP-DISCUSSION at MIT-MC, SHRAGE at WHARTON-10 Re: Iteration Facilities Indeed MAPF is a very modest attempt to abstract out the "MAP" type facilities. The LOOP facility is the more general user-oriented approach, and it is currently being distributed with MacLISP. LOOP grew out of several years of evolving iteration facilities, heavily influenced by InterLISP's; one very interesting aspect about it is that the source code works well in all of PDP10 MacLISP, MULTICS MacLISP, the Lisp Machine, and NIL. Glenn Burke and Dave Moon have a printed manual for using it, but the machine-source for that manual is itself quite readable; see the file LIBDOC;LOOP DOC on any ITS machine.  Date: 11 NOV 1980 1353-EST From: GSB at MIT-ML (Glenn S. Burke) To: deutsch at PARC-MAXC cc: LISP-FORUM at MIT-ML Re: iteration facilities Loop, in case anyone is interested, is being brought up in MDL. I don't know the current state of this effort.  Date: 17 November 1980 1642-EST (Monday) From: Guy.Steele at CMU-10A To: gsb at MIT-ML, moon at MIT-MC cc: lisp-forum at MIT-MC Re: LOOP Macro a Winner  Date: 17 November 1980 1938-EST (Monday) From: Guy.Steele at CMU-10A To: gsb at MIT-ML, moon at MIT-MC cc: lisp-forum at MIT-MC Re: LOOP Macro a Winner I haven't much liked LOOP macros in the past, but I think I endorse this one (for whatever that is worth). A few points: (a) There is a "dangling AND" problem, similar to the "dangling ELSE" problem with IF-THEN-ELSE. Suppose I write: (LOOP FOR I IN LIST-OF-INTEGERS WHEN (ODDP I) DO (FORMAT T "~%~D is odd" I) AND WHEN (ZEROP (REMAINDER I 3)) DO (FORMAT T " and divisible by 3") AND (FORMAT T ".")) That is, I want to print a message if I is odd, and add something to the message if also divisible by 3. The message should always end in a period. 
However, the above evidently gets parsed as: (LOOP FOR I IN LIST-OF-INTEGERS WHEN (ODDP I) DO (FORMAT T "~%~D is odd" I) AND WHEN (ZEROP (REMAINDER I 3)) DO (FORMAT T " and divisible by 3") AND (FORMAT T ".")) That is, it resolves the ambiguity the same way dangling ELSEs usually are: the AND is attached to the *innermost* eligible construct. Unfortunately, LOOP doesn't provide BEGIN-END blocks or FI for resolving the other way. Would it be hard to let a FI or NEHW or SSELNU or ENDIF or something terminate the innermost conditional? Then I could write: (LOOP FOR I IN LIST-OF-INTEGERS WHEN (ODDP I) DO (FORMAT T "~%~D is odd" I) AND WHEN (ZEROP (REMAINDER I 3)) DO (FORMAT T " and divisible by 3") NEHW AND (FORMAT T ".")) and expect it to work as desired. (Actually, would it be hard to add an ELSE as well? (LOOP ... IF x ... ELSE ...) would be about the same as (LOOP ... IF x ... UNLESS x ...) except for not evaluating x twice. You could allow THEN to be a gratuitous buzzword. But I won't be disappointed if you turn this down: that way lies madness (or at least PL/I).) (b) I propose that a form NAMED be added for naming the loop for use with RETURN-FROM: (LOOP NAMED SUE FOR HORRIBLE-THING IN CAVE ... (LOOP ... (RETURN-FROM SUE ...) ...) ...) (c) How about an easy way to declare new kinds of collecting LOOP forms? (That way I don't have to bug you to add my favorites.) 
Examples: (DEFINE-LOOP-COLLECTOR (NRECONC NRECONCING) NRECONC () T) (DEFINE-LOOP-COLLECTOR (MULTIPLY MULTIPLYING) TIMES 1 T) (DEFINE-LOOP-COLLECTOR (MAXIMIZE MAXIMIZING MAXING) MAX 0 ()) (DEFINE-LOOP-COLLECTOR (COLLECT-UNIQUE COLLECTING-UNIQUE) CONS-UNIQUE () T) where (DEFUN CONS-UNIQUE (X Y) (IF (MEMQ X Y) Y (CONS X Y))) Here we have the general form (DEFINE-LOOP-COLLECTOR <names> <function> <init> <flag>) where <names> and <function> are obvious, <init> is the value if zero things are collected, and <flag> is a truth value: non-() means the value after collecting one thing is the result of applying <function> to the thing and the <init>, while () means the value after collecting just one thing is the thing itself. Maybe this isn't general enough, but you get the idea of what I want. CONSING would be like COLLECTING but would produce a reversed list. UNIONING and INTERSECTING would also be useful. (I realize that one can get those by writing some parentheses, but...)  Date: 18 November 1980 12:34-EST From: Carl W. Hoffman To: ALAN at MIT-MC, Guy.Steele at CMU-10A cc: LISP-FORUM at MIT-MC Re: LOOP Date: 17 November 1980 1938-EST (Monday) From: Guy.Steele at CMU-10A (a) There is a "dangling AND" problem, similar to the "dangling ELSE" problem with IF-THEN-ELSE. Suppose I write: (LOOP FOR I IN LIST-OF-INTEGERS WHEN (ODDP I) DO (FORMAT T "~%~D is odd" I) AND WHEN (ZEROP (REMAINDER I 3)) DO (FORMAT T " and divisible by 3") AND (FORMAT T ".")) There are certain features I like about LOOP, but the ability to write code which looks like the above is not one of them. That is, I don't care for this attempt to create an algebraic-syntax language within Lisp. We can just as easily (and more clearly, in my Aunt Agathoid opinion) write (LOOP FOR I IN LIST-OF-INTEGERS DO (WHEN (ODDP I) (FORMAT T "~%~D is odd" I) (WHEN (ZEROP (REMAINDER I 3)) (FORMAT T " and divisible by 3")) (FORMAT T "."))) where (WHEN X . Y) is an abbreviation for (COND (X . Y)). 
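[The WHEN abbreviation just cited is a one-line macro; a minimal sketch, assuming only DEFMACRO and backquote:

```lisp
;; (WHEN X . Y) as an abbreviation for (COND (X . Y)), per CWH:
(DEFMACRO WHEN (TEST . BODY)
  `(COND (,TEST ,@BODY)))
```
]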
I classify LOOP's features into three categories: (1) The ease with which one can construct "data sources", as in (LOOP FOR X IN LIST-1 FOR Y IN LIST-2 FOR I FROM 1 TO N DO ...) where we don't have to worry about cdr'ing, car'ing, incrementing, and end-testing by hand. (2) The ease with which one can construct "data sinks", like COLLECT, APPEND, MAXIMIZE, SUM, COUNT, and THEREIS. (3) Various "control" features, like ALWAYS, NEVER, and FINALLY. It is not the LOOP keywords I object to. They seem essential for anything of this complexity, and we seem to be moving in that direction anyway. It is their infix nature. I would prefer to see a syntax of the form (LOOP (IN-LIST X LIST-1) (IN-LIST Y LIST-2) (INCREMENT I 1 N) (ALWAYS (> Y 3)) (DO ...) (FINALLY ...)) Other keywords might be IN-VECTOR, ON-LIST, etc. [along the lines of what LOOP provides], and a keyword like AND for parallel assignment, e.g. (LOOP (AND (REPEAT CHAR (TYI)) (REPEAT OLD-CHAR NIL CHAR)) (DO ...)) Where the REPEAT keyword is meant to describe something like LOOP's FOR X = ... THEN ... The problem arises when we try to implement the class of features mentioned under (2). Correct me if I'm wrong, but it appears that the only reason the LOOP keyword WHEN is needed [as opposed to writing DO (COND ...)] is in conjunction with the "data sink" facilities [COLLECT, COLLECT-INTO, MAXIMIZE, SUM, THEREIS]. We want forms like (COLLECT X) to appear arbitrarily deep in nested CONDS, which may contain vanilla Lisp forms as well. Unfortunately, COLLECT is not an isolated special form, but a part of LOOP's syntax. I don't see a simple solution to this, other than doing a hairy analysis/macro-expansion of code at macro-expand time, as is done by MC:ALAN;COMLET.  Date: 18 November 1980 17:54-EST From: Kent M. Pitman Re: LOOP loses I side with those that feel LOOP is a big mistake to put into Lisp in its current form. CWH's arguments about the parenthesizing are a valid criticism of LOOP in their own right. 
I agree with him completely and have little to add on that area. I will try here to confine myself to gripes of a different nature -- to do with semantics, rather than syntax ... The day may come in which we can program in English and be understood, but it's not here yet. I believe LOOP unfairly deludes a person into thinking that if it reads well in English, it's going to run. LOOP is a game between the programmer and the LOOP macro code. Sometimes it gets things right. If you're lucky, you even have a good enough model of what's going on to figure out when it'll guess right and when it won't. You're lucky you have that good a model ... What about those poor people who don't have such a good model and don't know what LOOP will win the game on and what cases it will bomb on. I somehow feel you're doing them a disservice by introducing such a DWIM-ish hack into Maclisp and I wish you'd leave loop as a cute package on LIBLSP that people can load iff they are willing to take the responsibility for what it does to them. Facilities with the power of LOOP certainly need to be devised, but they oughtn't be `endorsed' (which is my opinion of what AUTOLOAD'ing is) until people are happier with their syntax and semantics. I am satisfied with neither. Here are some samples of the sort of things that bother me about it. Some (maybe most) are clearly bugs in the particular implementation, but I include them anyway because I think that the fact that certain classes of bugs arise at all gives useful insight into the nature of the beast we are playing with ... Indeed, some only serve to characterize a larger problem which I couldn't begin to exhaustively list all of the obscure manifestations of... [a] It requires a hairy parser, and its error messages are correspondingly heuristic and confusing. 
(LOOP FOR X TO 3 FROM 4 DO (PRINT X)) ;TO unknown keyword in FOR/AS clause ;BKPT *RSET-TRAP (LOOP FOR X FROM 3 TO 4 DO PRINT X) ;X unknown keyword in LOOP ;BKPT *RSET-TRAP Here (LOOP FOR X FROM 3 TO 4 DO (PRINT X)) was desired. The error diagnostic is way off base. [And by the way, implementors, that shouldn't be a *RSET break. Use a real error channel, please? Thanks.] (LOOP FROM 3 TO 5 DO 3) ;FROM unknown keyword in LOOP This error message is totally confusing. FROM *is* a LOOP keyword, it's just not allowed in LOOP-toplevel context. I guess my feeling is that what the user saves in being able to write his code in this concise way, he'll lose back partly in trying to decipher the error messages, since they are considerably more obscure than the average Lispy error message. [b] Things which read right in English don't work. (LOOP FOR X FROM 3 TO 5 DO COLLECT X) ;X unknown keyword in LOOP (LOOP FOR X FROM 3 TO 5 DO (COLLECT X)) ;COLLECT UNDEFINED FUNCTION OBJECT Oops. Here I forgot that the English is only a magic illusion that works in certain places. Having to remember NOT to use parentheses may teach people bad habits about the rest of Lisp and will encourage a poor programming style if they ever code their own macros. [c] Two phrases which have equivalent English meanings and which are both parsed by LOOP do different things: (LOOP FOR X FROM 3 TO 5 COLLECTING X) (3 4 5) (LOOP COLLECTING X FOR X FROM 3 TO 5) (3 4 5 6) Most proponents of LOOP I have spoken with cite its abilities to insulate you from common fencepost errors as a feature. I'd rather not be deluded into thinking I am safe if I'm really not. [d] It doesn't constrain the way you order things sufficiently to avoid all sorts of totally obscure interpretations: (LOOP FOR X FROM 3 TO 5 COLLECT X FOR X FROM 3 TO 5) (3 5) This sort of bug could easily arise from having mistyped a variable name. 
Loop neither warns you that the interaction between the two X steppers will be terrible, nor does it attempt to make sense of the interaction.

[e] Left-to-right order of evaluation is played with by LOOP in unpredictable and dangerous ways:

(LOOP FOR X FROM (PROGN (PRINT 'FOO) 3) TO 5 DO (PRINT X)
      FOR Y FROM (PROGN (PRINT 'BAR) X) TO 3 DO (PRINT Y))
FOO
BAR
3
3
4
NIL

Left-to-right evaluation of this form would demand something different. The 3 should print before the BAR. I think LOOP is too unconstrained and leaves itself open to unpredictability. I don't like guessing at what it's going to do or trying to carry a model of its model of me. If it can't understand what I mean in the way I say it, it should just complain.

(LOOP FOR X FROM 3 TO 5 COLLECT X FOR X = (1- X))
;NIL NON-NUMERIC VALUE
;BKPT WRNG-TYPE-ARG
; X is bound to NIL.

It died on the GREATERP check, as seen here:

(LET ((X 3))
  (DECLARE (FIXNUM X))
  (LET (X)
    (PROG (G0010 G0009)
          (LOOP-COLLECT-INIT G0009 G0010)
     NEXT-LOOP
          (AND (GREATERP X 5) (GO END-LOOP))
          (RPLACD G0010 (SETQ G0010 (NCONS X)))
          (SETQ X (1- X))
          (SETQ X (1+ X))
          (GO NEXT-LOOP)
     END-LOOP
          (RETURN G0009))))

This occurs due to a bad interaction between the two X steppers. Probably such things should be invalid. Here's another case where order of evaluation and dual steppers for the same variable are allowed but do something some might find unintuitive at best:

(LOOP FOR X FROM 4 TO 100 UNTIL (NOT (ODDP X))
      FOR X FROM 3 TO 100 DO (PRINT X))
3
5
7
...etc...
75
77
NIL

I'll leave it to you to hypothesize (or check) what this one expands into.

[f] This one doesn't really count as a bug -- not much, anyway; it can occur in all kinds of code. It just looks especially bad in the LOOP formalism because the usual Lisp parentheses divisions are missing ...

(SETQ TO 3 DO 5)
5
(LOOP FOR FROM FROM TO TO DO DO (PRINT FROM))
3
4
5
NIL

Date: 11 December 1980 18:07-EST
From: George J. Carrette
Re: More LOOP fan mail.
A macsyma system programmer used loop in an includef file:

(LOOP FOR X IN PRELUDE-FILES COLLECT X COLLECT (GET X 'VERSION))

However, we made him remove it to save us the slowdown of the loading of loop. He grudgingly replaced it with this:

(DO ((X PRELUDE-FILES (CDR X))
     (RESULT NIL))
    ((NULL X) (NREVERSE RESULT))
  (PUSH (CAR X) RESULT)
  (PUSH (GET (CAR X) 'VERSION) RESULT))

Ok, uglier than the loop, maybe. But there is a nice mapping way to do it:

(MAPCAN #'(LAMBDA (X) `(,X ,(GET X 'VERSION))) PRELUDE-FILES)

Here are the results from compiling these three examples (in GJC;LOOPT >):

Example | code size (pdp10).
-------------------------------
LOOP    | 29.
DO      | 23.
MAP     | 11.  including SUBR generated from the LAMBDA.
MAP     | 27.  with MAPEX T to give open compilation.

The LOOP macro produces almost three times as much machine code as the map in this example. Observations? Using LOOP avoided using back-quote. GLS: Doesn't an optimizing compiler have a tougher time with a PROG construct and GO-TO's than with a regularized mapping construct? Maybe we need a shorter name for lambda, so people won't be afraid to type it? How do the proposed NIL mapping extensions compare with LOOP?
-gjc

Date: 11 DEC 1980 1856-EST
From: GSB at MIT-ML (Glenn S. Burke)
To: gjc at MIT-MC
cc: LISP-FORUM at MIT-ML
Re: LOOP fan mail

You missed one significant point and one example. You failed to count the space taken by the breakoff function:

pname:  6 words (3 fixnum, 3 list), although this could be longer or shorter depending on the genprefix.
symbol: 3 words (1 header, 2 sy block)
plist:  2 words, and one cons cell to intern it.

That makes the space comparison 23. words, not 11. Two of those words will not be shared (the symbol header, and the cons cell in the obarray). In any case, a more appropriate comparison with the open-coded map is:

(LOOP FOR X IN PRELUDE-FILES NCONC (LIST X (GET X 'VERSION)))

which happens to take 26. words.
That is because there is a special optimization for an NCONC of a call to LIST; fooling it by using (LIST* X (GET X 'VERSION) NIL) takes 27. words, the same as the open-compiled MAP, which i believe it emulates fairly closely.

Date: 11 December 1980 19:31-EST
From: George J. Carrette
Re: more loop fan mail

I didn't count the PNAME, SYMBOL & PLIST because there is no reason they need be generated. This is just implementation lossage in maclisp, which RLB has a fix for (if someone can justify the time to finish the project). Mainly I wanted to use code size as a measure of elegance and expressive power. I did count the machine code taken by the breakoff function. Even if one does count the symbol space used, it is not a fair comparison, since it buys you a lot: the ability to TRACE, and even to redefine, the function -- not something you get in a LOOP. A point not to be overlooked is the fact that with the MAP it is very natural to have these considerations decided at compile-time (or assemble/fasload/link/run-time), whereas with loop you make an irreversible decision at programming-time.
-gjc

Date: 11 DEC 1980 1938-EST
From: EB at MIT-AI (Edward Barton)
To: LISP-FORUM at MIT-AI, GJC at MIT-MC
Re: more loop fan mail

Why should anyone believe that generated code size is a measure of elegance and expressive power?

Date: 11 December 1980 21:11-EST
From: Glenn S. Burke
To: GJC at MIT-MC
cc: LISP-FORUM at MIT-MC
Re: even more loop fan mail

"Just implementation lossage"? That would seem to include all "lossage" other than design lossage. It seems kind of ridiculous to compare something against hypothetical features. (Do you happen to know how many calls to lsubrs, with one or more args, there are in macsyma? It's the same as the number of words of BPS which would be saved by a tweak in the compiler.) The non-open-compiled map buys you little except space.
The pdp-10 compiler will always open code in spite of the setting of MAPEX if you are iterating over more than one list, or if there are references in the lambda-expression to variables local to the containing code. The ability to trace or redefine such a thing is thus very undependable and, in any event, subject to the constraints of the particular lisp implementation (you need a handle on that name). And of course breaking off the function may keep the compiler from performing optimizations, if for no other reason than the constraint imposed by argument/value passing conventions.

Date: 12 December 1980 0122-EST (Friday)
From: Guy.Steele at CMU-10A
To: GJC at MIT-MC (George J. Carrette)
cc: lisp-forum at MIT-MC
Re: More LOOP fan mail.

Regarding your questions to me:
(1) An optimizing compiler might have an easier time of dealing with a mapping construct, or with a LOOP construct, or even a DO (!) than a PROG. However, COMPLR reduces all three to a PROG before doing its thing, so any differences in code size are attributable to odd differences in the constructions of the PROG.
(2) I used to have a macro called FN, in the days before #', such that (FN (X) body) => (FUNCTION (LAMBDA (X) body)), and I found it useful.
(3) Maybe JONL will comment on MAPF versus LOOP.

Date: 12 December 1980 09:42-EST
From: Jon L White
To: LISP-FORUM at MIT-MC
cc: GJC at MIT-MC, eb at MIT-AI, deutsch at PARC-MAXC2
Re: LOOP usage, versus "best-optimizeable-codings"

The value of iteration optimization, beyond the obvious things, must be low; for otherwise we'd never let MacLISP get away with turning DOs and LOOPs into PROGs. (Not to say that a modest amount of optimization isn't already there -- witness the obscure GOFOO marker in COMPLR for saving one cons cell per call to MAPCAR.)
GLS's note is very telling about this:

    Date: 12 December 1980 0122-EST (Friday)
    From: Guy.Steele at CMU-10A
    (1) An optimizing compiler might have an easier time of dealing with a mapping construct, or with a LOOP construct, or even a DO (!) than a PROG. However, COMPLR reduces all three to a PROG before doing its thing, so any differences in code size are attributable to odd differences in the constructions of the PROG.

At one time, the NIL special form MAPF was intended to express the union of all the "mapping" functions we know of, but we had no particular plans to make its compilation much more efficient than MAPC or DO. The note from Deutsch to LISP-FORUM some time ago shows that the higher-level abstraction of iteration concepts is a "feature" whose time has come. Also, as Barton points out:

    Date: 11 DEC 1980 1938-EST
    From: EB at MIT-AI (Edward Barton)
    Why should anyone believe that generated codesize is a measure of elegance and expressive power?

Beyond all other considerations, as is usually the case when one goes hunting the elusive "compiler optimization snark", it is very easy to be misled down a fruitless path, as this note from GSB shows:

    Date: 11 DEC 1980 1856-EST
    From: GSB at MIT-ML (Glenn S. Burke)
    You failed to count the space taken by the breakoff function: pname: 6 words; symbol: 3 words; plist: 2 words and one cons cell to intern it. That makes the space comparison 23. words, not 11. . . . In any case, a more appropriate comparison with the open-coded map is:
    (LOOP FOR X IN PRELUDE-FILES NCONC (LIST X (GET X 'VERSION)))
    which happens to take 26. words.

Date: 24 Nov 1980 15:33:26-PST
From: CSVAX.fateman at Berkeley
To: lisp-forum at mit-mc
cc: CSVAX.fateman at Berkeley
Re: macro expansion

Dan Friedman at Indiana (off arpa net) called me to find out if Franz could be made to do the following:

(dm a (l) l)  ;; dm defines a macro..
(a 1)  ==> infinite loop.

Right now it produces a stack overflow. Clearly it could be done by a "goto" in the evaluator.
Comments? Would such a feature be useful in various other lisp incarnations? Would such a feature hurt?

Date: 24 November 1980 19:41-EST
From: Alan Bawden
To: CSVAX.fateman at BERKELEY
cc: LISP-FORUM at MIT-MC
Re: macro expansion

Date: 24 Nov 1980 15:33:26-PST
From: CSVAX.fateman at Berkeley
Dan Friedman at Indiana (off arpa net) called me to find out if Franz could be made to do the following: (dm a (l) l) ;; dm defines a macro.. (a 1) ==> infinite loop. right now it produces a stack overflow. Clearly it could be done by a "goto" in the evaluator. Comments? Would such a feature be useful in various other lisp incarnations? Would such a feature hurt?

??? What exactly is the "feature" you are talking about? You prefer an infinite loop to a stack overflow? How is that a feature? Can you give us a better example, perhaps?

Date: 24 Nov 1980 16:51:42-PST
From: CSVAX.fateman at Berkeley
To: ALAN at MIT-MC
cc: LISP-FORUM at MIT-MC, CSVAX.fateman at Berkeley
Re: macro expansion

Friedman would like to use this notion of macro expansion for implementing control structures (like "while true do ...", or "and" of a very large number of objects). I believe it could also be used in conjunction with partial evaluation schemes. Yes, an infinite loop is to be preferred to a stack overflow, since the latter reveals the machine dependence of the implementation unnecessarily. (Friedman seems to prefer it for other reasons too.)

Date: 24 November 1980 20:15-EST
From: Alan Bawden
To: CSVAX.fateman at BERKELEY
cc: LISP-FORUM at MIT-MC
Re: macro expansion

Date: 24 Nov 1980 16:50:54-PST
From: CSVAX.fateman at Berkeley
Friedman would like to use this notion of macro expansion for implementing control structures (like "while true do ...", or "and" of a very large number of objects). I believe it could also be used in conjunction with partial evaluation schemes.
Yes, an infinite loop is to be preferred to a stack overflow, since the latter reveals the machine dependence of the implementation unnecessarily. (Friedman seems to prefer it for other reasons too.)

I still would like to see an example. I suspect that the application constitutes a mis-use of the macro facility, but I would like to see an example before passing such a judgement. (My suspicions are especially aroused by the fact that no correct usage of macros that I can imagine is recursive enough to upset a non-tail-recursive macro-expander.)

Test to determine if something is mis-using macros: Can you compile a program that uses these macros? If the answer to this question is "no", or even just "maybe" or "sometimes", then macros are being mis-used.

Date: 24 Nov 1980 21:02:47-PST
From: CSVAX.fateman at Berkeley
To: ALAN at MIT-MC
cc: LISP-FORUM at MIT-MC, CSVAX.fateman at Berkeley
Re: macro expansion

Why do you say the macros are being mis-used? If there are no incompatibilities when the expansion converges, why not give it a better meaning when the expansion does not converge? I asked Friedman about compilation, and he said he is not interested in that... As for a clearly USEFUL example, I cannot provide one off the top of my head, but you might look at some of Friedman's papers, perhaps. I have no particular stake in this personally.

Date: 25 November 1980 01:15-EST
From: Alan Bawden
To: CSVAX.fateman at BERKELEY
cc: LISP-FORUM at MIT-MC
Re: macro expansion

Date: 24 Nov 1980 21:02:02-PST
From: CSVAX.fateman at Berkeley
Why do you say the macros are being mis-used? If there are no incompatibilities when the expansion converges, why not give it a better meaning when the expansion does not converge?

Well, I haven't yet accused anyone of mis-using anything, since I am still trying to figure out just what we are talking about. I am not sure I can imagine a reasonable interpretation for a divergent macro expansion other than as an error.
In fact I think I prefer that it get a PDL overflow so that I can find out that I am losing right away, rather than waiting until I grow suspicious waiting for my program to finish. (It also helps in debugging if macros are expanded recursively; you can poke back up the stack and look at the original code before the macro expanded it into a DO with 69 GENSYMed labels...)

I asked Friedman about compilation, and he said he is not interested in that... As for a clearly USEFUL example, I cannot provide one off the top of my head, but you might look at some of Friedman's papers, perhaps. I have no particular stake in this personally.

Anyone who doesn't care about compilation, and who also wants some peculiar features in his interpreter, is a good candidate for writing his own interpreter! My misgivings about munging with the way that macros are expanded only extend as far as real-world Lisp programmers are forced to live with the result. I am as interested as anyone in interpreters with unusual and/or theoretically interesting properties. I have lost count of the interpreters I have written just to see what it is like to type to a read-eval-print loop with X strange property. Writing a Lisp interpreter in Lisp is an easy exercise that practically anyone can do (after all, someone else has already done the really hard part: written all the subrs!).

Date: 25 November 1980 1218-est
From: Bernard S. Greenberg
Re: Macros

I have been following this over CRD's shoulder (quite literally, and to his consternation). Having had exactly this debate with Friedman while at the Lisp conference, I have some idea what is going on here.. It is possible to implement "reduction languages" via meta-rules of the form "see if this thing is reducible; if so, reduce it, and keep doing this until it's not". These things are really interesting. I have a toy implementation (dating from 1973, when I heard Backus lecture on the subject) in MacLisp on Multics, if anybody cares, of Backus' "RED1".
I \believe/ this is the kind of thing that Friedman is hinting at: Lisp macro processing is exactly this kind of language. From my viewpoint, attempting to implement such a hack via Lisp macros is "cute", but kind of like Johnson's old saw about teaching bears to dance. If I did such a thing, I might show, say, ALAN, but not publicize it or let anyone else know I had done it. It is clearly WRONG, a gross misuse of Lisp macros and their features, and in terms of showing the power of anything, something that had better not be shown.

Date: 2 December 1980 1025-EST (Tuesday)
From: Guy.Steele at CMU-10A
To: Alan Bawden
cc: lisp-forum at MIT-MC
Re: macro expansion

Clearly what Friedman is talking about is something like tail-recursion for macro-expansions. The idea is that the macro result really should be evaluated "in place of" the macro call, rather than with a net level of stack. There's probably a correlation between macro-tail-recursion and whether or not the language in which the interpreter was implemented has ordinary tail-recursion.

Date: 10 NOV 1980 1704-EST
From: Rich at MIT-AI (Charles Rich)
Sender: GLR at MIT-AI
To: (BUG LISPM) at MIT-AI, (BUG LISP) at MIT-AI, LISP-FORUM at MIT-AI
cc: KMP at MIT-AI, RICH at MIT-AI
Re: Using macros in Mapping functions

I just finished reading Ken Pitman's Lisp Conference article on macros and Fexprs, in which he points out that a difference between macros and Fexprs is that you can apply Fexprs, which you cannot do with macros. This is indeed a significant point. I have been irritated for a long time by having to write forms like the following

(MAPCAR #'(LAMBDA (X) (SECOND X)) L)

when I would much rather write

(MAPCAR #'SECOND L)

I would like to ask now why the second form above cannot be taken as a shorthand for the first (using a gensym for the variables)?
It seems unreasonable to force the user to write the first form or, even worse,

(MAPCAR #'CADR L)

if the interpreter and compiler could systematically accept the shorter form as a valid abbreviation for the longer form. If I am not missing some obvious (or subtle) reason why Lisp can't support macros used in this way, may I propose this as an innovation for future Lisps.
Chuck Rich

Date: 10 November 1980 20:05-EST
From: Earl A. Killian
To: Rich at MIT-AI
cc: LISP-FORUM at MIT-MC
Re: Using macros in Mapping functions

[Note: I believe it is considered impolite to send messages to LISP-FORUM and also to BUG-LISPx too.]

I don't believe that you want Fexprs so much as open-coded functions. The example you chose (SECOND) should not really be implemented by a macro, but rather as an open-coded function; that would make it work for MAPing efficiently in both the compiler and interpreter. Some things that are done with macros can't be done with open-coded functions (e.g. SETF), but that's ok, as it isn't clear what it means to apply such a thing.

Date: 10 November 1980 21:37-EST
From: David A. Moon
To: Rich at MIT-AI
cc: LISP-FORUM at MIT-MC
Re: Using macros in Mapping functions

Let me simply point out that SECOND has been an open-codable function for some time, precisely so that people could do what you are asking to be able to do.

Date: 11 NOV 1980 0237-EST
From: RMS at MIT-AI (Richard M. Stallman)

On the Lisp machine, it is possible to apply a macro. In addition, there are DEFSUBST functions which you can call normally or macroexpand.

Date: 11 NOV 1980 0825-EST
From: RICH at MIT-AI (Charles Rich)
To: MOON5 at MIT-MC
cc: LISP-FORUM at MIT-AI
Re: Using macros in Mapping functions

Date: 10 NOV 1980 2137-EST
From: MOON5 at MIT-MC (David A. Moon)
Let me simply point out that SECOND has been an open-codable function for some time, precisely so that people could do what you are asking to be able to do.
Oh, I guess that's nice for SECOND (I hope the manual will be brought up to date), but it doesn't help for user-defined macros. I just gave SECOND as an example that people would recognize. This usually happens with my own macros -- then I am tempted to turn them back into functions, which it seems ought to be unnecessary. Besides, it feels to me like this issue of open-codable functions (EAK made the same point) is only one dimension (see KMP's article) of why one uses macros.

Date: 11 November 1980 12:06-EST
From: Jon L White
To: Rich at MIT-AI
cc: LISP-FORUM at MIT-MC
Re: Applicable Macros?

Re the controversy spurred by your note:

Date: 10 November 1980 17:04-EST
From: Charles Rich
Sender: GLR at MIT-AI
Subject: Using macros in Mapping functions

I might mention, as RMS did, that some "cleaner" version of DEFSUBST is what many users (you included) have apparently wanted, but been lacking. The Berkeley crowd diddled their macro definer to get some minimal capability (was it "&SAFE"?), and we've had DEFSIMPLEMAC for some time in MacLISP (see current LISP;LISP RECENT). But GJC, CWH, KMP and I have discussed what would be needed for a true DEFOPEN, and RWK is coding a module which would be a first step therein -- a "code walker" which gives a generalized facility for examining and "walking over" a piece of lisp code. When DEFOPEN is available, it would permit one to write EXPRs which would be "in-lined" or "integrated" when compiled (probably with some heuristic, yet fail-safe, decisions as to when to "in-line" and when to "close-compile").
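[Note: the DEFOPEN JonL describes did not yet exist when this was written; what follows is an editorial sketch of the lambda-substitution idea under discussion, with hypothetical names (DEFOPEN, OPEN-DEFINITION, OPEN-CODE-CALL are all made up for illustration).]

```lisp
;; Sketch: define NAME as an ordinary EXPR, and also record its
;; lambda-expression on the plist, so that a code walker could
;; rewrite call sites in place as ((LAMBDA vars body) actuals).
(DEFMACRO DEFOPEN (NAME VARS &REST BODY)
  `(PROGN 'COMPILE
          (DEFUN ,NAME ,VARS ,@BODY)
          (DEFPROP ,NAME (LAMBDA ,VARS ,@BODY) OPEN-DEFINITION)))

;; The rewrite itself is trivial once the definition is recorded;
;; deciding WHERE it is safe and profitable is the code walker's job.
(DEFUN OPEN-CODE-CALL (FORM)
  ((LAMBDA (DEF)
     (COND (DEF (CONS DEF (CDR FORM)))
           (T FORM)))
   (GET (CAR FORM) 'OPEN-DEFINITION)))

;; After (DEFOPEN SECOND (X) (CADR X)):
;;   (OPEN-CODE-CALL '(SECOND L)) => ((LAMBDA (X) (CADR X)) L)
```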
Date: 11 November 1980 1457-EST (Tuesday)
From: Guy.Steele at CMU-10A
Re: Forgot to CC this

- - - - Begin forwarded message - - - -
Date: 11 November 1980 1456-EST (Tuesday)
From: Guy.Steele at CMU-10A
To: RICH at MIT-AI (Charles Rich)
Subject: Re: Using macros in Mapping functions

While it is true that one uses macros for more than defining simple open-coded functions, if one is using a macro for more than just this, it probably doesn't make sense to MAP that macro. Indeed, that's the problem: the mapping function can't tell whether the macro is doing that simple job or something more complicated. The DEFSUBST mechanism provides a way to say that a macro-like thing in fact is restricted to this interesting special case.
- - - - End forwarded message

Date: 11 November 1980 18:13-EST
From: Kent M. Pitman

We should like the implementation of our subprocedures to be reasonably transparent. In large systems (eg, Macsyma) there may be a large number of function-like programs which are in fact implemented as macros. Having to remember which are subrlike macros and which are true subrs (and therefore MAPable) detracts from the elegance of the system. The coding style resulting from such an environment is one which encourages having two functions around which do the same thing but each of which has [seemingly] arbitrary conditions on when it can be used.

The `obvious' reason why we can't APPLY or MAP a macro is that its functional component expects to operate on a pointer to the whole macro form. eg, SECOND's definition acts on (SECOND x) as a unit. (MAPCAR #'SECOND x) doesn't work because there is no whole macro form anywhere. If SECOND is a displacing macro, it may want to displace that toplevel cons with (CADR x). The trouble in the MAPCAR case is that there is no toplevel cons to displace. The problem is that this obvious reason is an artifact of the evaluation method chosen and has nothing to do with high-level issues like getting problems solved.
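[Note: an editorial illustration of that `obvious' reason; SECOND is written as a macro here purely for the example.]

```lisp
;; A macro's functional component receives the WHOLE call form:
(DEFMACRO SECOND (X) `(CADR ,X))

(SECOND '(A B C))   ; the expander sees (SECOND '(A B C)); result is B
;; But in (MAPCAR #'SECOND L) no (SECOND ...) form ever exists:
;; MAPCAR would have to hand the expander a bare list element,
;; which is not a macro call, so there is nothing to expand --
;; and, for a displacing macro, no toplevel cons to clobber
;; with RPLACA/RPLACD.
```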
It is, as RICH points out, frustrating to have to worry about such trivial details when you are trying to solve some real problem. DEFSUBST is the wrong answer because its semantics are ugly and weak. Program-writing programs cannot reliably generate DEFSUBST forms because of the problems of variable naming conflicts (nested lambdas with similar names, quoted symbols, etc). A real effort needs to be made toward having a general-purpose code-walker/meta-evaluator which can correctly analyze the code to be SUBSTed for. If we could make

(DEFOPEN FOO (X) (LIST 'X X ((LAMBDA (X) (LIST 'X X)) X)))

mean

(FOO A) => (LIST 'X A ((LAMBDA (X) (LIST 'X X)) A))

we'd be in much better shape than we are now. That would allow us to define real open-coding techniques. We just don't have such support yet. As an interim solution, we might want to look into what it takes to make something like a

(DECLARE (SUBRLIKEMACRO (name number-of-`args') ...))

work so that #' could have help in deciding what to do. The compiler could in theory turn

(DEFMACRO FOO (X) `(CDADR ,X))
(DECLARE (SUBRLIKEMACRO (FOO 1)))
(DEFUN BAR (X) (MAPCAR #'FOO X))
(DEFUN BAZ (L) (APPLY #'FOO L))

into

(DEFMACRO FOO (X) `(CDADR ,X))
(DEFUN G0001 (X) (CDADR X))
(DEFUN BAR (X) (MAPCAR #'G0001 X))
(DEFUN BAZ (L) (APPLY #'G0001 L))

I'm not sure if this is a good idea. I think really that the meta-evaluator is the right thing to have. GJC has suggested for Macsyma that perhaps a single meta-evaluator should exist which can drive both the compiler and the interpreter, so that the semantics of code in the two are uniform... In any case, I'd be interested in hearing others' opinions on these ideas.
-kmp

Date: 11 November 1980 23:28-EST
From: George J. Carrette
To: KMP at MIT-MC
cc: LISP-FORUM at MIT-MC

There should be no need for DEFSUBST, since DEFUN provides the data needed. What functions are candidates for open compilation probably should come from runtime data and experience.
SETF could also get data from DEFUNs; it's just that it's more complicated than doing it via macros, since more is left to the compiler (in this case the SETF macro expander) to determine. People who write macros can and do make all the errors (multiple evaluations, introduction of name-conflicting local variables, etc.) that you mentioned as problems facing a DEFSUBST implementation.
-gjc

Date: 12 November 1980 13:03-EST
From: Kent M. Pitman
Re: Historical clarification on &PROTECT

Date: 11 November 1980 12:06-EST
From: Jon L White
.. I might mention as RMS did that some "cleaner" version of DEFSUBST is what many users (you included) have apparently wanted, but been lacking. The Berkeley crowd diddled their macro definer to get some minimal capability (was it "&SAFE"?), and we've had DEFSIMPLEMAC for some time...
-----

Actually, it was &PROTECT. It's in their DEFMACRO compatibility package, I think -- not a primitive part of Franz Lisp. It doesn't show up in the manual, anyway. JKF's mail to MACSYMA-I on the subject follows for those interested in what it did...
-----
Date: 17 Jul 1980 14:04:06-PDT
From: CSVAX.jkf at Berkeley
To: macsyma-i@mit-mc
Re: special defmacro

We are converting many small functions to macros and have run into the problem of a macro argument being evaluated twice. In the case of atoms, this does not matter, but of course it does for function calls. I added a special clause to our defmacro, called &protect, which checks at macro-expansion time if certain macro formal variables are bound to lists, in which case it surrounds the macro with a lambda expression. I recall you doing something like this a while ago and I would like to see what your syntax for this is. An example is:

(defmacro EVEN (&protect (A) A)
  `(AND (FIXP ,A) (NOT (ODDP ,A))))
-----

I presume this means that (EVEN X) => (AND (FIXP X) (NOT (ODDP X))), while

(EVEN (F)) => ((LAMBDA (G0001) (AND (FIXP G0001) (NOT (ODDP G0001)))) (F))

-kmp

Date: 12 November 1980 23:25-EST
From: Earl A. Killian

I also think that DEFSUBST and the like is a total crock. Doing real open-coded functions is the right answer. It takes all of 5 lines of code in GLS's S-1 compiler (S1COMP) to handle open functions; basically, if you do

(DEFUN-OPEN SECOND (X) (CADR X))

then (SECOND mumble) turns into

((LAMBDA (X) (CADR X)) mumble)

And that's it. This will "work" in any compiler I know of, though it will only be efficient if the compiler knows how to toss LAMBDAs around (as S1COMP does). However, that's as things should be; the compiler should do these things (i.e. substitute for lambda variables) rather than the user; the compiler will get it right, at least. Also, using LAMBDA extensively can eliminate possible naming conflicts. E.g. I believe LISPM code could lose if it used *L* in a SOME or EVERY, or *SELECT-ITEM* in a SELECTQ.

[Note: the above implementation isn't 100% accurate, as S1COMP actually processes the (LAMBDA (X) (CADR X)) in a null lexical environment, so that the free variables in the lambda-body (none in this case) actually refer to globals and not to variables lexically apparent at the invocation, but that's not very hard.]

Date: 13 November 1980 02:34-EST
From: David A. Moon
Re: I suppose I have to put in my two cents about DEFSUBST

There is, of course, a software engineering tradeoff as to whether you want to have to be somewhat careful about defining your open-codable functions so that simple textual substitution works, or whether you would rather have an unusably slow compiler. The Lisp machine chose the former alternative, although as the feasible memory size and processor power for a personal computer gradually increase, the alternative of "doing it right" may become attractive. "Doing it wrong" is not VERY wrong, of course, or we wouldn't have done it.

Date: 13 Nov 1980 10:28 PST
From: Deutsch at PARC-MAXC
To: MOON at MIT-MC (David A. Moon)
cc: LISP-FORUM at MIT-MC
Re: I suppose I have to put in my two cents about DEFSUBST

My two cents' worth is on the other side of the fence from Moon's. "An unusably slow compiler" is just rhetoric. My experience is that the hidden glitches resulting from textual substitution rather than LAMBDA-binding fall in the same category as many other such things: they are hard to find, because you don't see the mechanism that is causing things to go wrong; they are hidden from you on a case-by-case basis by the whim of the implementor of the macro; they are easy to miss when you define your own macros; they are easily done right by the compiler. I have wanted the S1-type careful LAMBDA expander in Interlisp for many years.

Date: 13 November 1980 16:04-EST
From: George J. Carrette
To: MOON at MIT-MC
cc: LISP-FORUM at MIT-MC
Re: I suppose I have to put in my two cents about DEFSUBST

Remember a while back when JONL objected to my suggestion (at the time directed at JKF and &PROTECT) that open-coded functions simply LAMBDA-FY? He mentioned the same thing you do now, "an unusably slow compiler". Well, since then we have talked about it a bit, and I showed him a way to "cache" information from DEFUNs so that you can get very close to doing it *right* without duplicating a lot of computations in the compiler.

Date: 13 NOV 1980 0234-EST
From: MOON at MIT-MC (David A. Moon)
To: LISP-FORUM
Re: I suppose I have to put in my two cents about DEFSUBST

There is, of course, a software engineering tradeoff as to whether you want to have to be somewhat careful about defining your open-codable functions so that simple textual substitution works, or whether you would rather have an unusably slow compiler. The Lisp machine chose the former alternative although as the feasible memory size and processor power for a personal computer gradually increase the alternative of "doing it right" may become attractive. "Doing it wrong" is not VERY wrong of course or we wouldn't have done it.
It's simply that your "engineering tradeoff" is a false dilemma; the key is "as to whether *you* want to ... be somewhat careful about defining your open-codable ...". Just who is *you*? Let's take one of the simple cases:

(DEFUN FOOP (X) (NOT (MEMQ (TYPEP X) '(BAR BAZ))))

Even the simplest meta-evaluator can tell you that a simple substitution is 100% correct here. Given any open compilation, there are 3 areas of analysis:

[1] Things you can say just by looking at the body of the defun.
[2] Things you can say just by looking at the actual arguments.
[3] Things which involve complicated interactions of body and arguments.

Obviously the information of [1] is that which is done by "people" now, but it need not be. A combination of a good [1] (which may *reject* some candidates for open compilation as being too complicated to be worth it) and a simpleminded [2] (basic SIDE-EFFECTS-P, CONSTANT-P, DEPENDANT-P) seems to cover the existing uses for open compilation very well. => For each candidate for open compilation, a decision method for doing the open compilation is compiled. [meta-compilation] I think the engineering tradeoff is one of how much time you want to spend hacking the compiler vs. how fast it is vs. how good it is (vs., of course, how the compiler influences coding style throughout the system).
-gjc

Date: 14 NOV 1980 0434-EST
From: RMS at MIT-AI (Richard M. Stallman)
Re: DEFSUBST

I think it would be fine to change Lisp machine DEFSUBST to avoid evaluating args twice or in the wrong order, as long as the simple cases which are used now remain efficient. I don't expect that this would require a terrible slowdown, since usually the args are such that it is easy to tell that it is ok to do the direct substitution. Fortunately, this doesn't require a compiler which has any hair about LAMBDA. That probably would be a big loss and is almost certainly out of the question.
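[Note: an editorial sketch of the "easy to tell that it is ok to do the direct substitution" test, along the lines of GJC's simpleminded [2] analysis above; all names here are hypothetical.]

```lisp
;; An actual argument may be substituted directly for a lambda
;; variable when it is a constant or a plain variable reference,
;; i.e. evaluating it any number of times, in any order, is harmless.
(DEFUN SAFE-TO-SUBST-P (ARG)
  (OR (ATOM ARG)                      ; variable, number, etc.
      (EQ (CAR ARG) 'QUOTE)))         ; quoted constant

(DEFUN ALL-SAFE-P (ARGS)
  (COND ((NULL ARGS) T)
        ((SAFE-TO-SUBST-P (CAR ARGS)) (ALL-SAFE-P (CDR ARGS)))
        (T NIL)))

;; Open-code a call by substitution when every argument is safe;
;; otherwise leave the call closed.  (A real opener must also
;; refuse to substitute inside QUOTEd data in the body.)
(DEFUN OPEN-CALL (FORM VARS BODY)
  (COND ((ALL-SAFE-P (CDR FORM))
         (SUBLIS (MAPCAR 'CONS VARS (CDR FORM)) BODY))
        (T FORM)))

;; (OPEN-CALL '(FOOP Y) '(X) '(NOT (MEMQ (TYPEP X) '(BAR BAZ))))
;;   => (NOT (MEMQ (TYPEP Y) '(BAR BAZ)))
;; (OPEN-CALL '(FOOP (F)) '(X) '(NOT (MEMQ (TYPEP X) '(BAR BAZ))))
;;   => (FOOP (F))            ; left closed: (F) may have side effects
```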
I don't know why KMP talks about DEFSUBST as if the definition of DEFSUBST included a guarantee that it will evaluate arguments twice or in the wrong order. I never intended that to be part of its contract. DEFSUBST is that which defines an open-codable function. The rest is a matter of how it is implemented. There must be a distinct defining form for open-codable functions. It would be intolerable for the compiler to open-code any function unless the user gives permission. It is welcome to refrain from open coding a function for some reason, but it has no right to take the initiative to do it. It would screw users several ways: it would be confusing in debugging; it would prevent tracing; it would prevent redefinition of the function from working. The problems would be reduced if there were a data base which records which compiled functions depend on each open-coded function (this would be useful in any case, so it's a good idea). But even with this feature, it is a bad idea to open-code anything that the user doesn't expect.  Date: 14 November 1980 12:23-EST From: Kent M. Pitman To: RMS at MIT-AI cc: LISP-FORUM at MIT-AI Re: DEFSUBST clarification Date: 14 NOV 1980 0434-EST From: RMS at MIT-AI (Richard M. Stallman) ... I don't know why KMP talks about DEFSUBST as if the definition of DEFSUBST included a guarantee that it will evaluate arguments twice or in the wrong order. I never intended that to be part of its contract. DEFSUBST is that which defines an open-codable function. The rest is a matter of how it is implemented. ... ----- I didn't say it guaranteed anything one way or another. I said it didn't guarantee things which need to be guaranteed by some function which really tries to solve the problem that RICH was addressing. I think it's a cute, but hackish, solution. 
Its current definition provides little or no insight into what the problem really is or how it can be solved, and it requires the macro-definer to do a partial compilation (deciding if double-evaluation of args will occur and/or if it will matter, etc.). -kmp  Date: 19 August 1980 22:39-EDT From: Glenn S. Burke To: Multics-Lisp-People at MIT-MC cc: LISP-FORUM at MIT-ML Re: macro/expr property selection by funcall Having the funcall mechanism ignore a macro property preceding some other applicable property is a loss, and will probably cause bugs and confusion if people try to redefine things. Crocks like this should be limited to the compiler, if they are allowed at all; i believe that it is defined behaviour for complr to use any macro property, even one preceded on the plist by another functional property. I know i have seen this used in the past. So if you need to have both of them present, make the macro come last. A somewhat nicer alternative would be to have a property used similarly to the MACRO property, but ONLY used by the compiler. The multics lisp compiler has such a beast, but the property name is not available to the user (wrong obarray). Then again, an appropriately constructed OPTIMIZER (ala lispm) could subsume such a function, and would be more useful in the long run. My own solution, which is neither pretty nor easy to use without support code, has been to use MACROLIST in pdp-10 maclisp, the *MACRO property on multics (would you believe a load-time-eval to get the symbol off the right obarray?), and the COMPILER:OPTIMIZERS property on the lispm. As for open coding, if i were doing it by using the ordering of the macro/expr properties, i would want random interpreted calls to go through the expr version rather than the macro. That, if nothing else, would determine the order i would put the properties on in.  Date: 2 November 1980 02:27-EST From: Alan Bawden Re: Nutz! MacLisp has a new function: MAKE-LIST. 
This is just the function to call in those cases where you want to cons up a list of nils of a certain length. (How many times have I written that DO loop, I wonder?) The LispMachine has a function MAKE-LIST as well. It also conses up a list of nils. Great! Now I can use it in my code right? Wrongo, cons-breath! Not unless I write it as: (MAKE-LIST #Q 17) NUTZ!  Date: 2 November 1980 05:50-EST From: Jon L White To: ALAN at MIT-MC cc: LISP-FORUM at MIT-MC Re: MAKE-LIST Date: 2 November 1980 02:27-EST From: Alan Bawden MacLisp has a new function: MAKE-LIST. . . . Now I can use it in my code right? Wrongo, cons-breath! Not unless I write it as: (MAKE-LIST #Q 17) NIL too has MAKE-LIST; neither MacLISP nor NIL support the "area" consing feature as such, but no doubt a similar thing in NIL will be done this way: (DEFUN MAKE-LIST (SIZE &OPTIONAL (AREA SI:DEFAULT-AREA)) ...)  Date: 3 November 1980 01:52-EST From: Daniel L. Weinreb Sender: dlw at CADR6 at MIT-AI Re: MAKE-LIST Yes, the Lisp Machine MAKE-LIST should originally have been defined to take the area second, as an optional argument. What you make Maclisp do depends more on whether you are concerned with compatibility, or the right thing. Both the NIL and Lisp Machine groups seem extremely reluctant to put up with language deficiencies, even minor ones, simply to be compatible with the other.  Date: 3 November 1980 06:56-EST From: Jon L White To: DLW at MIT-MC cc: LISP-FORUM at MIT-MC Re: Compromise? Date: 3 November 1980 01:52-EST From: Daniel L. Weinreb Yes, the Lisp Machine MAKE-LIST should originally have been defined to take the area second, as an optional argument. What you make Maclisp do depends more on whether you are concerned with compatibility, or the right thing. Both the NIL and Lisp Machine groups seem extremely reluctant to put up with language deficiencies, even minor ones, simply to be compatible with the other. 
In this case, the MAKE-LIST one, it seems highly unlikely that open-coding is important -- why not let it be a subr which "sniffs" at its two arguments and reverses them if the one which should be an "area pointer" is a "fixnum" and vice versa? We could then say 1) MAKE-LIST has the "right" definition 2) but practical implementations have a kludge which is "tolerant" of a somewhat lazy programmer (and still is 99.99% fail-safe)  Date: 3 November 1980 12:10-EST From: Kent M. Pitman To: JONL at MIT-MC cc: LISP-FORUM at MIT-MC Re: MAKE-LIST -- Compromise? Please, not that way ... Date: 3 November 1980 06:56-EST From: Jon L White Date: 3 November 1980 01:52-EST From: Daniel L. Weinreb Yes, the Lisp Machine MAKE-LIST should originally have been defined to take the area second, as an optional argument. What you make Maclisp do depends more on whether you are concerned with compatibility, or the right thing. Both the NIL and Lisp Machine groups seem extremely reluctant to put up with language deficiencies, even minor ones, simply to be compatible with the other. In this case, the MAKE-LIST one, it seems highly unlikely that open-coding is important -- why not let it be a subr which "sniffs" at its two arguments and reverses them if the one which should be an "area pointer" is a "fixnum" and vice versa? We could then say 1) MAKE-LIST has the "right" definition 2) but practical implementations have a kludge which is "tolerant" of a somewhat lazy programmer (and still is 99.99% fail-safe) ----- That's not the `right' definition. That's lossage. It smells of DWIM. I think it's far more important for these languages to have regular definitions than for them to be precisely compatible. If they have regular definitions, they can at least be mechanically translated back and forth. The sort of kludge suggested here is ugly and uncalled-for. Alan's suggestion of (MAKE-LIST #Q area fixnum) is far cleaner ... which doesn't say much. 
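JONL's "sniffing" subr is easy to write down, which is part of its temptation. A hedged sketch (MAKE-LIST-FORGIVING and %MAKE-LIST are invented names; areas are modeled here as symbols, whereas the Lisp Machine actually uses fixnums to identify areas, which is precisely why Steele observes below that the kludge cannot work there):

```lisp
;; In this sketch an "area pointer" is any symbol.
(defun area-pointer-p (x) (symbolp x))

;; Stand-in primitive: ignores the area, conses in the default heap.
(defun %make-list (area size)
  (declare (ignore area))
  (make-list size))

;; JONL's "tolerant" MAKE-LIST: accept (area size) or (size area),
;; sniffing at the argument types and swapping if need be.
(defun make-list-forgiving (a b)
  (cond ((and (integerp a) (area-pointer-p b))
         (%make-list b a))              ; size given first: swap
        ((and (area-pointer-p a) (integerp b))
         (%make-list a b))              ; "right" order already
        (t (error "MAKE-LIST: can't tell which argument is the area"))))

;; (make-list-forgiving 3 'working-storage-area)  =>  (NIL NIL NIL)
;; (make-list-forgiving 'working-storage-area 3)  =>  (NIL NIL NIL)
```

The dispatch works only so long as the two argument types are disjoint, which is KMP's and Steele's objection in a nutshell.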
You're talking about the design of two languages which attempt to *correct* some of the kludgeyness in their predecessors (Maclisp) ... I think this is a truly bad idea. Why is it that you can't just change the definition in NIL and recompile the code? I can write you a Teco macro that will do it query-replace-style for a buffer... If you have a tags file, I can make it map the whole file. I think the compromises DLW was suggesting involved actually having one side or the other actually concede the point and change their definition. NIL has little code depending on it right now -- far less than LispM. The NIL source is probably the only body of code that is relying on a particular definition, and I think it would be easier for us to change NIL than for LispM to change all of their code and their users' code. If compatibility is really desired, I think that's how it's to be achieved in this case.  Date: 3 November 1980 12:38-EST From: George J. Carrette Re: MAKE-LIST Who says &optional arguments have to be pushed on the right? If MAKE-LIST is called with one argument, it could only be a fixnum giving the size. If given two arguments, the first could be the area. I have also found a third argument useful, to make a list of 0's or 0.0's. -gjc  Date: 3 November 1980 12:48-EST From: jonl at MIT-MC, kmp at MIT-MC Sender: JONL at MIT-MC Re: What can LISPM community do about MAKE-LIST Despite any previous mail, we sense a general feeling that the optional second argument to MAKE-LIST should be the area rather than the size. Could this be considered the consensus of the LISP-FORUM community? More importantly, NIL really can't accommodate the current LISPM definition, since there are a whole slew of generalized sequence functions with the same format, and it would be a gross inconvenience to have the semantics of MAKE-LIST differ from the others. 
E.g. MAKE-string, MAKE-vector, MAKE-bits, etc. form part of such a nice symmetry that it would be a shame to break it for no good reason.  Date: 4 November 1980 00:34-EST From: Daniel L. Weinreb Sender: dlw at CADR6 at MIT-AI Re: What can LISPM community do about MAKE-LIST Hold everything. I have asked a few LispM hackers, and those I have asked seem to feel that we should change our MAKE-LIST despite the extra difficulty, since taking the area as the first argument is really not right. I have not spoken to everyone yet; I'll do so and get back to LISP-FORUM about it. Until then, don't do anything hasty like changing NIL.  Date: 5 November 1980 1139-EST (Wednesday) From: Guy.Steele at CMU-10A Re: MAKE-LIST kludge Actually, aside from the aesthetics (or lack thereof) of the proposed "twiddle the arguments" kludge, there is a lurking danger. If people get used to being able to write the arguments in either order, then such code will not be back-transportable to the LISP Machine. Moreover, the LISP Machine cannot use the kludge; it uses fixnums to identify areas. Therefore one cannot tell whether (MAKE-LIST 3 5) was intended to make a 3-list in area 5, or a 5-list in area 3.  Date: 3 November 1980 12:19-EST From: Jon L White To: KMP at MIT-MC cc: LISP-FORUM at MIT-MC Re: MAKE-LIST -- compromise or not? Your lengthy note on this subject simply suggested that NIL accept the buggy LISPM definition, and roll backwards, right? Why not let those in the LISPM community express a view as to whether or not it would be feasible and preferable for the LISPM definition to change?  Date: 11 November 1980 12:32-EST From: Jon L White To: SHRAGE at WHARTON-10 cc: LISP-FORUM at MIT-MC, LISP-DISCUSSION at MIT-MC Date: 9 Nov 1980 (Sunday) 0116-EDT From: SHRAGE at WHARTON (Jeffrey Shrager) Subject: A suggested enhanced naming convention How about adding functions to replace (eventually) the MAPCAR, MAPCAN... . . . 
NIL has the special form "MAPF", which even works in MacLISP; it isn't "autoloadable" in maclisp, but comes defined as a macro in the file LISP:DRAMMP.FASL (several other NIL facilities are in that file). There may be some more general extension beyond MAPF, but for those of you who weren't in on the discussion of two years ago, the definition of MAPF follows, lifted from the file MC:NIL;NEWFUN > MAPF - (MAPF <type> <sources> <fn> <arg1> ... <argn>) This is an attempt to generalize all the various mappers by having the first two arguments specify the various options. *** These first two items are not evaluated, but the remaining arguments are evaluated just as in MACLISP. <type> must be among {LIST, NCONC, PROJ1, VECTOR, STRING, BITS, +, +$} <sources> is a list of symbols which are source-descriptors; a single symbol is treated like an infinite list of that symbol. The descriptors are among {LIST, CAR, VECTOR, STRING, BITS, 1+, 1-, 1+$, 1-$, CONSTANT} <fn> should evaluate to a function of n arguments, and <arg1> ... <argn> evaluate to the argument sources for <fn>. The meaning of the <type> is: LIST Return a LIST of all the successive results of the application of <fn>. NCONC Same as LIST, but as if NCONC were applied to the result, flattening it by one level. PROJ1 Return the value of <arg1>, as MAPC would. VECTOR Return a VECTOR of all the successive results of the application of <fn>. STRING The result of each application must be a CHARACTER - Return, then, the STRING of all those CHARACTERs. BITS The result of each application must be either 0 or 1 - the resultant 0's and 1's are packed into a BITS. +, +$ Return the numerical sum of all outputs. "+" specifies that they will all be FIXNUMs, "+$" for FLONUMs. The meaning of the <source-descriptor> is: LIST The argument source is a list, which is successively CDR'd after successive applications. 
CAR Same as LIST, except the CAR of the list is given as the argument to <fn>. VECTOR The successive items of a VECTOR are given. STRING The successive characters of a STRING are given. BITS The successive bits (0's or 1's) of a BITS are given. 1+, 1+$ The argument source is a FIXNUM (or FLONUM in the "1+$" case) which is successively incremented by 1. 1-, 1-$ Same as the "1+" case, but it is decremented by 1. CONSTANT The argument source is repeatedly given, unmodified, as the corresponding argument to <fn>; it ** DOES NOT ** mean that the type of the source is a constant, but only that it is not "stepped". Right now, not all the combinations are worked out, but these will certainly be installed initially: (MAP FOO . rst) ==> (MAPF 'PROJ1 'LIST FOO . rst) (MAPLIST FOO . rst) ==> (MAPF 'LIST 'LIST FOO . rst) (MAPC FOO . rst) ==> (MAPF 'PROJ1 'CAR FOO . rst) (MAPCAR FOO . rst) ==> (MAPF 'LIST 'CAR FOO . rst) (MAPCON FOO . rst) ==> (MAPF 'NCONC 'LIST FOO . rst) (MAPCAN FOO . rst) ==> (MAPF 'NCONC 'CAR FOO . rst) (MAPVECTOR FOO . rst) ==> (MAPF 'VECTOR 'VECTOR FOO . rst)  Date: 11 Nov 1980 1315-EST From: MARC at MIT-XX To: JONL at MIT-MC, SHRAGE at WHARTON-10 cc: LISP-FORUM at MIT-MC, LISP-DISCUSSION at MIT-MC MDL has had a similar construct for quite some time (five years?). There are two SUBRs, MAPF (MAP-First) and MAPR (MAP-Rest), which map over successive elements and successive RESTs (CDRs) of structures. The syntax is: <MAPF finalf loopf struc-1 ... struc-N> The way it works is that the loopf (usually an embedded FUNCTION but also permitted to be an externally defined function or a SUBR) is applied to successive elements (or RESTs) of the strucs. This continues until one of the strucs "runs out" (becomes empty), or the loopf performs a MAPLEAVE or a MAPSTOP. If the finalf is a FALSE (NIL in conventional Lisps), MAPF/R is thus just a FOR-EACH style loop. If the finalf is applicable, it is applied to the accumulated values of the applications of the loopf. 
The loopf may control the accumulation of values itself by performing MAPRETs inside itself. The arguments of MAPRET (any number) become the values from that application of the loopf. MAPSTOP is like MAPRET except that it says: after accumulating these arguments, apply the finalf and return. MAPLEAVE says "return my argument as the value of the MAPF/R". Note also that if no strucs are supplied, MAPF becomes a sort of generator function to build structures. Examples: [the MDL forms, written in MDL's angle-bracket syntax, were mangled in this archive; only their descriptions survive] One returns a list of the members of CANDIDATES that pass a certain test. Another is precisely MAPCAR. A third returns a list of the elements in a hash-table. (! is the MDL construct that means "take my elements instead of me") MDL, of course, is coming soon to a machine near you... Yours, Marc Blank Dave Lebling  Date: 11 Nov 1980 10:30 PST From: Deutsch at PARC-MAXC To: JONL at MIT-MC (Jon L White) cc: SHRAGE at WHARTON-10, LISP-FORUM at MIT-MC, LISP-DISCUSSION at MIT-MC MAPF is an attempt to solve the generality-of-mappers problem. Interlisp makes a different kind of attempt: rather than a function, it provides a fairly complex special syntax for constructing iterations. I think this approach is a serious mistake; however, the Interlisp facility IS more general than MAPF and should be looked at by anyone thinking about this problem. See the section on "Iteration Statements" in the CLisp chapter of the Interlisp manual, but PLEASE try to read it for the facilities it provides rather than the way it provides them.  Date: 11 November 1980 13:39-EST From: Jon L White To: deutsch at PARC-MAXC cc: LISP-FORUM at MIT-MC, LISP-DISCUSSION at MIT-MC, SHRAGE at WHARTON-10 Re: Iteration Facilities Indeed MAPF is a very modest attempt to abstract out the "MAP" type facilities. The LOOP facility is the more general user-oriented approach, and it is currently being distributed with MacLISP. 
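JONL's table of correspondences (MAPF 'LIST 'CAR ≈ MAPCAR, and so on) pins down the common cases well enough to sketch. The toy below, in modern Common Lisp, is illustrative only: MAPF* is an invented name, it is an ordinary function over a single list rather than the non-evaluating special form NIL defines, and it covers just the LIST/NCONC/PROJ1 types crossed with the LIST/CAR sources.

```lisp
(defun mapf* (type source fn list)
  ;; SOURCE: CAR steps the list and passes elements to FN; LIST passes
  ;; the successive tails, as MAPLIST/MAPCON do.
  ;; TYPE: LIST collects the results, NCONC splices them (flattening by
  ;; one level), PROJ1 returns the original argument as MAPC would.
  (let ((results '()))
    (do ((l list (cdr l)))
        ((null l))
      (push (funcall fn (if (eq source 'car) (car l) l)) results))
    (case type
      (list  (nreverse results))
      (nconc (apply #'nconc (nreverse results)))
      (proj1 list))))

;; (mapf* 'list 'car #'1+ '(1 2 3))       =>  (2 3 4)    ; like MAPCAR
;; (mapf* 'nconc 'car #'list '(1 2 3))    =>  (1 2 3)    ; like MAPCAN
;; (mapf* 'list 'list #'length '(1 2 3))  =>  (3 2 1)    ; like MAPLIST
```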
LOOP grew out of several years of evolving iteration facilities, heavily influenced by InterLISP's; one very interesting aspect about it is that the source code works well in all of PDP10 MacLISP, MULTICS MacLISP, the LISPMachine, and NIL. Glenn Burke and Dave Moon have a printed manual for using it, but the machine-source for that manual is itself quite readable; see the file LIBDOC;LOOP DOC on any ITS machine.  Date: 25 January 1981 03:30-EST From: Earl A. Killian Re: /\ For the LISPM at least, it would seem to me that it'd be winning to allow the user to specify the quote character in the -*- line, just like the base is specified there when nonstandard. People could then convert individual files at any time, and then update the -*- line. Some distant day in the future we could change the default, and then people that didn't want to be bothered would simply have to specify \ there. Unfortunately, I don't think this would help Maclisp users much.  Date: 25 Jan 1981 0515-EST From: Dave Andre To: LISP-FORUM at MIT-AI cc: DLA at MIT-EECS Re: /\ Mess. I vote for the change. "/" is just too common a character to be a quote character, and "\" really has no other use. In a winning file system, all files would have arbitrary properties, and for the transition, a new property which tells lisp readers of the file whether the format is "new" or "old" would be the answer. Currently file properties such as this are kept with the text, on the -*- line. This has the obvious disadvantage that, in order to change the file's properties, one has to change the text. However, since it is the only property facility that exists at the moment, it's probably the right thing to use. So I propose that, at some point in time, anything which uses lisp code should check for a "Format: New" in the file's property list, and if it's not there, offer to translate the file before "using" it. 
The problem with this scheme is that not all people use the -*- stuff at the beginning of a file, and some might want to flush it after it has been written out by the translator. As I see it, the only other viable option to determine the file's format is to use the file's date, as suggested before. The problem with this is that people may write code using the old quoting mechanism after the magic date. I see the problems with my scheme as easier to cope with. -- Dave  Date: 25 JAN 1981 2259-EST From: Moon at MIT-AI (David A. Moon) To: DLA at MIT-EECS cc: LISP-FORUM at MIT-AI Re: /\ Mess, upward compatibility I was assuming that if this change is made, in the Lisp machine there will be a new -*- entry, Syntax:Lisp meaning the new standard, Syntax:Oldslash meaning the old way. This would minimize the immediate need for painful file conversion. We could even defer making Syntax:Lisp the default for a while. This would affect both the Lisp reader and the editor. This Syntax: property is scheduled to be installed anyway, to ease other syntax extensions, mostly by users rather than the system. Unfortunately things aren't so easy in the case of Maclisp and Emacs.  Date: 25 January 1981 23:14-EST From: Kent M. Pitman To: MOON at MIT-MC cc: LISP-FORUM at MIT-MC Certainly Emacs is no problem. M-X Lisp Mode can be trivially made to look at the information in the -*-...-*- region and set q..D appropriately. We may be able to coax Maclisp to win, too; we'll see. Might I suggest that Syntax: take a list instead of a symbol? Eg, -*- Mode: Lisp; Syntax: (Slash MyOption1 ...); -*- or maybe even an alist of features and values as in -*- Mode: Lisp; Syntax: ((Slash . nil) (MyOption1 . T) ...); -*- if you don't already have something like this in the works?  Date: 25 January 1981 23:29-EST From: George J. Carrette To: Moon at MIT-AI cc: DLA at MIT-EECS, LISP-FORUM at MIT-AI Re: /\ Mess, upward compatibility Date: 25 JAN 1981 2259-EST From: Moon at MIT-AI (David A. 
Moon) To: DLA at MIT-EECS cc: LISP-FORUM at MIT-AI Re: /\ Mess, upward compatibility I was assuming that if this change is made, in the Lisp machine there will be a new -*- entry, Syntax:Lisp meaning the new standard, Syntax:Oldslash meaning the old way. This would minimize the immediate need for painful file conversion. We could even defer making Syntax:Lisp the default for a while. This would affect both the Lisp reader and the editor. This Syntax: property is scheduled to be installed anyway, to ease other syntax extensions, mostly by users rather than the system. Unfortunately things aren't so easy in the case of Maclisp and Emacs. I don't see how Maclisp could possibly win if you expect it to read and process what is in a read-time ";" comment. Does a LOAD on the LISPM really go outside of the language and parse what is in a ";" comment? Shouldn't that info be in package definitions, or at least be lisp readable in the file? With all the language purity and philosophy arguments I hear from Lispm people, I really wonder sometimes. -gjc  Date: 26 January 1981 00:19-EST From: Alan Bawden To: GJC at MIT-MC cc: MOON at MIT-MC, LISP-FORUM at MIT-MC Re: George, take a walk until your hat floats. Date: 25 January 1981 23:29-EST From: George J. Carrette ... Does a LOAD on the LISPM really go outside of the language and parse what is in a ";" comment? Shouldn't that info be in package definitions, or at least be lisp readable in the file? With all the language purity and philosophy arguments I hear from Lispm people, I really wonder sometimes. If we had a file system that would allow us to associate arbitrary properties with files (property lists) then we wouldn't have to store this information on the first line of the file. It has nothing to do with the LISP language. Some files start out with a line saying "-*-Mode:Text-*-", should that be "lisp readable"? Take a walk George.  Date: 26 January 1981 00:38-EST From: George J. 
Carrette To: ALAN at MIT-MC cc: LISP-FORUM at MIT-MC, MOON at MIT-MC Re: George, take a walk until your hat floats. Date: 26 January 1981 00:19-EST From: Alan Bawden Date: 25 January 1981 23:29-EST From: George J. Carrette ... Does a LOAD on the LISPM really go outside of the language and parse what is in a ";" comment? Shouldn't that info be in package definitions, or at least be lisp readable in the file? With all the language purity and philosophy arguments I hear from Lispm people, I really wonder sometimes. If we had a file system that would allow us to associate arbitrary properties with files (property lists) then we wouldn't have to store this information on the first line of the file. It has nothing to do with the LISP language. Some files start out with a line saying "-*-Mode:Text-*-", should that be "lisp readable"? Take a walk George. Is this a serious argument? So what if a file says "-*-Mode:Text-*-", if you LOAD that file does it cause an error message: "Hey this file is not Mode:Lisp!" to be printed out? or do you just get a syntax error. Your filesystem argument assumes a false dilemma, showing that you didn't read my note very carefully. Convince me that a mature system needs hacks like these in order to associate modules with a backpointer to the module class? Tell me why the lisp environment isn't sufficient to contain this information. -gjc  Date: 26 January 1981 03:34-EST From: Robert W. Kerns To: GJC at MIT-MC cc: LISP-FORUM at MIT-MC Re: George, take a float until your hat walks. GJC, putting the file-plist info into LISP forms that go inside the file is obviously bogus, because this information is not at all specific to LISP files. TEXT files, MACSYMA files, etc, all want a file PLIST, and it is nonsense to demand that TEXT and MACSYMA files have LISP-format information in them. 
And the question "Tell me why the LISP environment isn't adequate to contain this information" is like asking "Why have files, can't we just keep everything in ZWEI buffers?".  Date: 26 January 1981 09:55-EST From: George J. Carrette To: RWK at MIT-MC cc: LISP-FORUM at MIT-MC Re: mode lines I'm not against mode lines per se, I just think the present implementation is unnecessarily crufty, really a relic from ITS Emacs, and one which would take extra amounts of code to implement in maclisp, finally arriving at an inferior and weak feature. I have nothing against putting (-*- Mode Lisp Package Macsyma Syntax Old Base 8 -*-) in a file. And I fail to see the objection a Tex user would have for %% (-*- Mode Tex -*-) or the macsyma user for /* (-*- Mode Macsyma -*-) */ My argument is simply that the above is easier to parse, for human or computer, than ;;; -*- Mode: Lisp; Package Macsyma; Base 8 -*- which looks more like the syntax of Algol. What could be the objection to supporting a sharp-sign feature which controlled read syntax? e.g. #S(Mode Lisp Package Macsyma Syntax Old Base 8) %% #S(Mode Tex) /* #S(Mode Macsyma) */ The argument is: "why cruft up your lisp world, the one in which most of your work is done, with notions and features which are not native to it, which are difficult to support in the mother tongue (i.e. Maclisp)?" The claim is that in this climate of many suggested changes, for example "/" <=> "\", there should be some interest in increasing the power of features which can be used to insulate innocent users from changes. Recall Moon's note where he announced a new "modeline" feature for the Lispm; he dismissed the poor maclisp users with the statement, "I guess there will be more of a problem doing this in maclisp." I think that is a callous and unimaginative attitude. 
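GJC's central point, that a Lisp-readable mode line needs no parser beyond READ, can be sketched directly. The helper below is illustrative (MODE-LINE-PROPERTY is an invented name, and it assumes the parenthesized mode form has already been read in as an ordinary list):

```lisp
;; Given a mode form such as (MODE LISP PACKAGE MACSYMA SYNTAX OLD BASE 8),
;; "parsing" is just GETF on a plist -- no parser needed at all.
(defun mode-line-property (mode-form indicator)
  (getf mode-form indicator))

;; Pulling the form out of a file's first line is a single READ once the
;; leading comment characters are skipped:
(with-input-from-string (s "(MODE LISP PACKAGE MACSYMA SYNTAX OLD BASE 8)")
  (let ((form (read s)))
    (list (mode-line-property form 'package)    ; => MACSYMA
          (mode-line-property form 'base))))    ; => 8
```

The entire "parser" is one GETF, which is the contrast GJC is drawing with the semicolon-and-colon syntax.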
[Alan, who has expressed favour for the status-quo on the "/" issue, certainly doesn't address the transportable modeline problem with arguments which start off by talking about the taking of long walks and floating.] -gjc  Date: 26 Jan 1981 1236-EST From: Dave Andre To: GJC at MIT-MC cc: DLA at MIT-EECS, lisp-forum at MIT-MC, Moon at MIT-MC Re: /\ Mess, upward compatibility The issue is not in language purity, but in data storage. When a program such as LOAD wants to load some data, it has to know what format the data is in. Currently there are two separate formats: QFASL and S-expressions. Ideally the file system should know what the format of the data in the file is, and be able to return it when asked for it. The fact that this information is stored with the data is a loss in the current file system, but it is the only real alternative to re-writing the file system so that files can have arbitrary properties. Currently, on the lisp machine, you can ask a file what its properties are, and it returns the contents of the -*- line. This does not require one to write a parser for the line; it already exists. The fact that such a feature doesn't exist in Maclisp is what Moon was referring to (I believe).  Date: 26 January 1981 12:43-EST From: Robert W. Kerns Sender: RWK0 at MIT-MC To: GJC at MIT-MC cc: LISP-FORUM at MIT-MC Re: file mode lines The fact that you'd even consider demanding that it be LISP format rather than requiring parsing it reflects strongly on the kind of program you'd consider writing in MacLisp. If you were going to write an editor in MacLisp (or if you were going to make use of this feature in a program in another language, say TECO), then you'd consider the ability to use READ to be of much less importance. It should be easy enough to provide a $LOAD or some such name which does just what you want. You could even take the LISPM code, probably.  Date: 26 January 1981 16:03-EST From: Kent M. 
Pitman To: GJC at MIT-MC cc: LISP-FORUM at MIT-MC, RWK at MIT-MC Re: mode lines Date: 26 January 1981 09:55-EST From: George J. Carrette Re: mode lines ... I just think the present implementation is unnecessarily crufty, really a relic from ITS Emacs, and one which would take extra amounts of code to implement in maclisp, finally arriving at an inferior and weak feature ... I heartily agree. This really hits the mark. The file property list in its present form is indeed a hard thing to find and decipher. I had formerly not given much thought to the situation, but indeed, there is no reason that it could not be given Lisp-readable syntax. In the worst case, you might have to write something of the form... (FILE-PLIST Mode Lisp Package Macsyma ...) ; -*- Lisp -*- where the -*- Lisp -*- was just for Emacs' sake and was redundant with info in the file property list. Indeed, suppose Zwei wanted some new mode like "Mode LispMLisp" which Emacs didn't support [yet]; you might actually want to be specifying the two things independently anyway... I have nothing against putting (-*- Mode Lisp Package Macsyma Syntax Old Base 8 -*-) ... The above is easier to parse, for human or computer, than ;;; -*- Mode: Lisp; Package Macsyma; Base 8 -*- which looks more like the syntax of Algol... What could be the objection to supporting a sharp-sign feature which controlled read syntax? e.g. #S(Mode Lisp Package Macsyma Syntax Old Base 8) Actually, one might oppose the use of #S on the grounds that after a #S(Mode Lisp Package Foo Sharpsign Disabled), a (LOAD "...") would lose where "..." has a #S(...) at the top. If you take the attitude that every file needs to be read under default settings, then you have a problem that if the guy sets syntax of chars manually in a file (eg, init file), the effects don't stick after the load. An intermediate suggestion between GJC's and LispM's current one would please me. 
Eg, if a file starts with ;;; -*-( ZweiMode Lisp Package Foo ...)-*- then the form after the "-*-(" would be in Plist form, fit for reading with lisp read. This would make parsing it infinitely easier. Further, for the sake of varying the syntax table, I suggest that people be strongly encouraged to put in it only symbols with alphabetic printnames, and only vanilla lists. No dotted pairs or funny syntaxes. This will insulate people from the backslash/slash controversy, strange macro characters, lisps that can't read tokens like .FOO., etc. The list should be read in base 10 (maybe this is already done on LispM, I dunno) so that people can do -*-( ... Ibase 10 )-*- or -*-( ... Ibase 8 )-*- as appropriate. You might even want to bind the syntax table explicitly so that alpha chars were read as symbols, digits as digits or extended alphabetics, and all else (eg, space, tab, and even ";") as whitespace. This lets you do ;;; -*-( ZweiMode Lisp ;;; Package Foo )-*- and still win because the ";;;" would be whitespace... And lisps that didn't know about the convention wouldn't be confused by such an extended syntax. I really think we ought to take George's comments very seriously. I have yet to see elegant code to hack the file property-list. Many different languages have to understand these things, and the current algolish syntax has really been painful for me to interpret [correctly] from both Maclisp and Teco... -kmp  Date: 26 January 1981 16:19-EST From: Kent M. Pitman To: ALAN at MIT-MC cc: GJC at MIT-MC, LISP-FORUM at MIT-MC, MOON at MIT-MC Re: Floating hats Date: 26 January 1981 00:19-EST From: Alan Bawden To: GJC, MOON, LISP-FORUM Re: George, take a walk until your hat floats. Date: 25 January 1981 23:29-EST From: George J. Carrette ... Shouldn't that info ... at least be lisp readable in the file? ... ... Some files start out with a line saying "-*-Mode:Text-*-", should that be "lisp readable"? ... ----- I'll argue that it should be. 
I am a firm believer that in spite of "#" and readmacro characters and format ~mumbles and loop, the thing that makes lisp win is that it really has a simple syntax. The parser needed for vanilla lisp syntax is about the simplest you could devise. Further, many non-lispy languages have support for grokking lisp syntax -- eg, Teco and Midas. How many can boast of similar support for an algolesque syntax like -*-Mode:Text-*- -- not even Algol itself ... So I think you have to agree that George's argument is not so far out of line as you might have originally felt. Fredkin, in introducing Lisp to 6.034 when I took it a few years back, claimed that one of the advantages of lisp was that it was `syntax free' ... I claim that arguing for Lispy notation is just arguing for syntaxlessness and is reasonable. -kmp  Date: 26 Jan 1981 1624-EST From: Dave Andre Re: mode lines To: KMP at MIT-MC cc: DLA, LISP-FORUM at MIT-MC Why doesn't someone re-write the ITS file-system to include properties which aren't in the text of the file? A standard operation on the file would be to ask it what the foo property is, and another one would set the foo property. Arbitrary properties could be supported, and properties would be kept separate from the text of the file, where they probably aren't appropriate. Such a feature is going to be in the Lispm file system when it comes out, and it makes a lot of sense.  Date: 26 January 1981 17:26-EST From: Kent M. Pitman To: DLA at MIT-MC cc: LISP-FORUM at MIT-MC Re: ITS getting file properties I would be willing to give up file authorship if that area in the file directory could be used to store file property lists instead ... the advantage to this is that ITS wouldn't have to be rewritten totally. The disadvantage, of course, is that each file couldn't store more than -- uh, what is it? -- nine bits of information in its plist? ...
It's not my place to speak for moon and the rest of the ITS wizards, but my guess is that such a task couldn't be undertaken right away. You might find it more productive to keep an eye on the weather reports for Hell -- look for signs of heavy frost. Sorry for the sarcasm. Couldn't resist.  Date: 26 JAN 1981 1754-EST From: DLW at MIT-AI (Daniel L. Weinreb) Re: -*- line Many messages about this subject have been very confused. It would take more time than I can afford to answer everyone personally and point-by-point. Other messages on this topic have been cogent and correct, and so I will be repeating some things people have said. No, the -*- line is not a relic from ITS EMACS. ITS EMACS introduced the simple use of -*- to express the major mode. I invented the generalization into property lists, in response to ITS's lack of a property list feature in the file system. I introduced it SPECIFICALLY for Lisp. I INTENTIONALLY did not use a Lisp-like syntax. The application in question was for the Package property. It is impossible to modify ITS to put in file property lists. 100% out of the question; don't even think about it. File property lists have nothing whatsoever to do with Lisp. There is absolutely no reason for them to be in Lisp-like syntax. If we had ALGOL or SNOBOL-4 on ITS, they would use file property lists too. The syntax of file property lists is extremely simple to the point of being near-trivial. If you have trouble parsing them, then you cannot program your way out of a paper bag. Furthermore, once a parser has been written for a given language system, it is written, once and for all, and anyone can use it. Such code exists in the Lisp Machine and in Emacs; nobody ever has to write the parser again. Writing one for Maclisp would be an easy task for any reasonably competent Maclisp programmer.
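For concreteness, here is roughly what such a parser looks like, sketched in latter-day Common Lisp rather than the MacLisp of this discussion. The function names and the SPLIT-ON helper are inventions for illustration, and no claim is made that this matches the Lisp Machine or Emacs parsers:

```lisp
;;; Sketch of a -*- ... -*- file property list parser.
;;; Assumed grammar: -*- Indicator: Value; Indicator: Value -*-
;;; where indicators and values are plain character strings.
(defun parse-file-plist (line)
  "Return an alist of (indicator . value) strings from LINE,
or NIL if LINE contains no -*- ... -*- property list."
  (let* ((start (search "-*-" line))
         (end (and start (search "-*-" line :start2 (+ start 3)))))
    (when end
      (loop for prop in (split-on #\; (subseq line (+ start 3) end))
            for colon = (position #\: prop)
            when colon
              collect (cons (string-trim " " (subseq prop 0 colon))
                            (string-trim " " (subseq prop (1+ colon))))))))

(defun split-on (char string)
  "Split STRING at every occurrence of CHAR."
  (loop for beg = 0 then (1+ pos)
        for pos = (position char string :start beg)
        collect (subseq string beg (or pos (length string)))
        while pos))

;; (parse-file-plist ";;; -*- Mode: Lisp; Package: Macsyma; Base: 8 -*-")
;; => (("Mode" . "Lisp") ("Package" . "Macsyma") ("Base" . "8"))
```

(It ignores niceties such as comma-separated multiple values; the point is only how little machinery the format demands.)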
The current implementation of file property lists has the extremely winning property that you can edit them without having to learn a whole new bunch of file system commands, all of which would probably have to get implemented in DDT, EMACS, ZWEI, et al. They get saved with backup tapes. They get copied when the file gets copied, and this fact is OBVIOUS (nobody needs to wonder whether or not they get copied along with the file, or to wonder whether, like the creation date, they might not be considered part of the contents of the file). The Package property is used for several things, not all of which I can reconstruct right now. Indeed, you can get this information from the package declaration. However, it is useful for error-checking to have the information in the file as well, and it is also useful so that you can read in a file without necessarily pre-loading the package declaration, in simple cases. I think the file property list works the right way. I see no reason for changing anything about it.  Date: 27 JAN 1981 0256-EST From: dlw at MIT-AI (Daniel L. Weinreb) Re: Explanation about file property lists. I spoke to KMP and found that the following points probably need to be cleared up: File property lists are defined in string-oriented terms. That is, they look like "-*-" "Indicator" ":" "Value" ";" "Indicator" ":" "Value" "-*-" (in the case of a two-property-long list). Indicators and Values are character strings. They may contain numbers and alphabetic characters (including hyphen as an alphabetic). You can also use commas to have several value strings. They are just character strings; it is up to various programs to interpret them. You can't do computations with them, they are not Lisp data structure, they do not allow reader-macros, etc. They are extremely simple. That's all there is to them.
The Lisp Machine happens to have a function that opens a file, finds the property list, and creates a Lisp list corresponding to it, representing indicators by symbols in the keyword package and so on. And when it sees commas in a value, it creates a list. However, this function exists only because it is a convenient interface for Lisp programs to use. There is nothing implicit in the file property list that has anything to do with Lisp, at all.  Date: 27 January 1981 03:12-EST From: Kent M. Pitman Re: more on file prop lists i might also add a few details that dlw left out of his clarification: ";" and ":" can't occur in "indicator" or "value" and currently no file property list hacking code claims that it will do the right thing with multiple-line property lists. hence, my concern was over the problem of parsing a much more complex structure in a file plist; something that had not been well-advertised was that a file plist can contain no such complex structure, so it is manageable. of course, this doesn't say that more elaborate structures in the file plist might not be useful some day, but since no one has needed them in the 2 years or so that lispms have been in use, i suppose there is no reason to really push for the introduction of such primitives until we get some more experience with the power of simple file properties ... so i am content for now with dlw's explanation. -kmp Date: 3 MAR 1981 1108-EST From: DGSHAP at MIT-AI (Daniel G. Shapiro) Re: mouse explanation line in sys 300A, 65.0 Just wanted to say that the explanation line for mouse buttons is an excellent idea. Dan Date: 5 September 1980 1021-EDT (Friday) From: Guy.Steele at CMU-10A To: JONL at MIT-MC (Jon L White) cc: lisp-forum at MIT-MC Re: Multiple values I think that on grounds of cleanliness I too would want PROG1 to handle multiple values from its first argument.
Without that there doesn't seem to be any simple way to get something evaluated after the possibly-multiple-valued value-returning expression is evaluated. On the S-1 I had planned to use a reserved register in the following way (I think): for ordinary calls it is not used, and so they are not slowed. A multiple-value call must push the register and then set it to 1; on return the caller must pop it. A callee which returns multiple values sets the register to the number of values it is returning for the caller to look at. Hmm, there's a problem here. I'll have to look up the true solution. Date: 17 April 1981 2110-EST (Friday) From: Guy.Steele at CMU-10A To: lisp-forum at MIT-MC cc: Scott.Fahlman at CMU-10A Re: Proposed mildly incompatible change to MULTIPLE-VALUE(-BIND) I would like to suggest that the procedure call and procedure return interfaces be made more similar by means of the following slightly incompatible change to MULTIPLE-VALUE and MULTIPLE-VALUE-BIND. Recall that currently each of these takes a list of variables and a form to evaluate; MULTIPLE-VALUE-BIND additionally has a PROGN-body. Each evaluates the form, producing possibly multiple values; the variables are then SETQ'd or bound, respectively, to the values. Now, currently, if not enough values are supplied, then the extra variables receive NIL, while if too many values are produced the excess ones are discarded. I propose that the list of variables be completely identical in syntax to a LAMBDA-list. This includes the use of &OPTIONAL and &REST. One can get precisely the current functionality by inserting &OPTIONAL at the front of the variables list. In addition, one would be able to get non-NIL default values; a list of some of the values, using &REST; and better error checking, because one can *require* that certain values be delivered. This is an incompatible change, but will screw only programs which depend on the defaulting to NIL or the discarding of excess values.
Such programs are easily fixed by inserting &OPTIONAL or &REST IGNORE, respectively. I suspect that most programs ask for exactly the number of variables they are going to get, however, and so would be unaffected but better error-checked. --Guy  Date: 17 April 1981 21:53-EST From: David A. Moon To: Guy.Steele at CMU-10A cc: LISP-FORUM at MIT-MC, Scott.Fahlman at CMU-10A Re: Proposed mildly incompatible change to MULTIPLE-VALUE(-BIND) This sounds superficially plausible and has been proposed many times before. The reason it has never been accepted is that in practice most functions which return multiple values rely on the feature that extra values the caller does not want are thrown away. Calling and returning aren't really as symmetric in practice as they are in theory. It probably would be useful to have a way of requiring at least a certain number of values to be returned. Of course, then all the functions which rely on extra variables being supplied NIL would have to be fixed. This is pretty common practice when you have code like (and (values )).  Date: 22 April 1981 17:00-EST From: Jon L White To: Guy.Steele at CMU-10A cc: LISP-FORUM at MIT-MC, Scott.Fahlman at CMU-10A Re: Proposed mildly incompatible change to MULTIPLE-VALUE(-BIND) As it happens, the MacLISP implementation of Multiple-values does a quick, 1-instruction-in-compiled-code check for too-few returned values, and if so, then merely calls a function which by default pads out the return-value-vector with nulls; the LISP RECENT note announcing that feature mentions how a user can switch it to other actions, such as complaining about mismatch, etc. There would be virtually no penalty for implementing your "mildly incompatible suggestion" in that code. However, I tend to agree with the comment made by Moon "The reason it has never been accepted is that in practice most functions which return multiple values rely on the feature that extra values the caller does not want are thrown away."
So it would be somewhat of a burden to make the change now; primarily because of the backwards-compatibility mess, rather than the implementational problems.  Date: 23 April 1981 1354-EST (Thursday) From: Guy.Steele at CMU-10A Re: Proposed mildly incompatible change to MULTIPLE-VALUE(-BIND) I guess changing MULTIPLE-VALUE and MULTIPLE-VALUE-BIND to take the &OPTIONAL and &REST syntax would be more trouble than it is worth in practice. I'll no longer push the idea. --Guy Date: 1 December 1980 03:53-EST From: Jon L White Re: null LAMBDA bodies? Should (LAMBDA (x)) really be permitted? I.e., should the interpreter and compiler infer that a () return value is warranted? Should (LET ((x y))) be permitted? How about (PROGN)? I would have preferred the error checking for at least LET and LAMBDA.  Date: 1 December 1980 04:33-EST From: Kent M. Pitman Re: ((LAMBDA ())) => NIL ? I think programs which construct programs need the flexibility of having ((LAMBDA ())) and (PROGN) return NIL. Interlisp users should be happy with that since omitted cruft seems to always be the same as a list of NIL's. Eg, consider a program which accepts a list of forms to evaluate on some condition (DEFMACRO FOO-CONDITION-HANDLER (&REST FORMS) `(DEFUN FOO-HANDLER () ,@FORMS)) Do you want to tell the user that if he's zero'ing out what FOO-HANDLER does, he must type (FOO-CONDITION-HANDLER NIL)? That's silly. It wants a list of forms to be EVAL'd. You don't want to evaluate anything; evaluating NIL is not the same thing. Requiring the macro-writer to remember to write: (DEFMACRO FOO-CONDITION-HANDLER (&REST FORMS) `(DEFUN FOO-HANDLER () NIL ; sigh, in case of no forms... ,@FORMS)) is just as bad. (LAMBDA (...stuff1...) ...stuff2...) should be like (LAMBDA (...stuff1...) (PROGN ...stuff2...)) and I think there's really no question that (PROGN) needs to return NIL.
Program-writing-programs can come up with the case of `(PROGN ,@STUFF) where stuff is NIL very easily and making it an error is just asking for lossage. It's clear when it arises that the guy didn't want to do anything. Why remove the obvious semantics to force an error on a guy in a case where what he wanted was indeed the original obvious semantics?  Date: 1 December 1980 12:50-EST From: Daniel L. Weinreb Sender: dlw at CADR8 at MIT-AI To: JONL at MIT-MC, LISP-FORUM at MIT-MC Re: null LAMBDA bodies? I agree with KMP; all of those things should be permitted. That is exactly the sort of thing that macros can generate easily. After all, we allow PROGs (and therefore DOs) with null bodies; we should also allow PROGNs (and therefore LETs, LAMBDAs, etc) with null bodies. Also note that COND clauses have an "implicit PROGN" which may be empty. I think it is important that the language be consistent about this, for the sake of clarity, and also that it allow empty PROGNs for the sake of macros.  Date: 1 December 1980 20:00-EST From: Richard L. Bryan To: JONL at MIT-MC, LISP-FORUM at MIT-MC Re: null LAMBDA bodies? I agree with KMP and DLW: empty LAMBDAs, PROGNs, etc. are consistent, well defined, and useful enough to include in the language. Unfortunately, the "implicit PROGN" of a COND clause is *NOT* subject to the same generalization at all. Somehow this seems to break all the arguments here based on language consistency.  Date: 2 December 1980 1238-EST (Tuesday) From: Guy.Steele at CMU-10A Re: null LAMBDA bodies? Actually, the "implicit PROGN" of a COND is consistent with other implicit PROGN's, if you take the point of view that the predicate is actually the first element of the PROGN. Then it is always true that the last element is returned. What is inconsistent is the funny test after the first PROGN element.
Under this interpretation PROGN of zero arguments is then consistent: (COND ((FOO X) Y) () (T GLEEP)) That second clause is an empty PROGN, which produces (), so the test fails, and... Existing implementations probably barf at that, but it *is* reminiscent of the treatment of top-level ()'s in an ASSQ list, isn't it?  Date: 2 December 1980 18:41-EST From: Kent M. Pitman To: Guy.Steele at CMU-10A cc: LISP-FORUM at MIT-MC Re: Special names for Open-Coded Functions Date: 2 December 1980 1034-EST (Tuesday) From: Guy.Steele at CMU-10A To: Moon@AI, Bug-Lisp Re: MAX not open-coded I lobbied some time ago to change the current MAX and MIN to MAXIMUM and MINIMUM, and have MAX, MIN, MAX$, MIN$ -- but it would have been incompatible. Sigh. ----- I'm not convinced that the idea of making special operators that do the same thing as generic operators is a good one. Several issues get mixed every time this comes up and I think it's deserving of some group analysis. Let me break things into what I think are distinct cases... [a] `Simple' Open-Coding It is the case, however, that (I'll use GLS's MAX naming convention from above) (MAX x y) is in all cases the same as (MAXIMUM (FIXNUM-IDENTITY x) (FIXNUM-IDENTITY y)) in the interpreter, so it is reasonable to open-code this in the compiler. In some sense then, having a name for (MAX x y) is only sugar and not really necessary. Surely any compiler that could recognize one case could recognize the other. Moreover, even if it was looking for MAX, we should hope that writing (MAXIMUM (FIXNUM-IDENTITY ...) (FIXNUM-IDENTITY ...)) would not slow us down, since we've supplied enough information to trivially allow the compiler to do the right thing in all cases. That means, then, that we'll at least have to justify having the abstract operation MAXIMUM being different from MAX. [b] `Hard' Open-Coding There is a problem with +/+$/PLUS in that there are differences in effect between some of them.
Eg, (+ x y) is not the same as (PLUS (FIXNUM-IDENTITY x) (FIXNUM-IDENTITY y)) in the Maclisp interpreter and it's a (maybe good, maybe bad) efficiency hack that they're compiled the same. Eg, (PLUS (FIXNUM-IDENTITY #o377777777777) (FIXNUM-IDENTITY #o377777777777)) is not the same as (+ #o377777777777 #o377777777777) in Maclisp. Compiled, both return -2 while interpreted the former returns #o777777777776, a bignum. I think here that the Maclisp compiler does the user a disservice by turning (PLUS (FIXNUM-IDENTITY ...) (FIXNUM-IDENTITY ...)) into (+ ...) because now cases where the user has a macro (FOO) which expands into a piece of code that is (FIXNUM-IDENTITY ...) and a macro BAR which he has not written but which turns out to be clever enough to realize that (BAR (FIXNUM-IDENTITY ...)) is the same as (FIXNUM-IDENTITY (BAR ...)) -- eg, SETF? -- and he does (PLUS (BAR (FOO)) 3) someplace where he really wants to force the fancy PLUS addition that makes overflow turn to bignums, he has no way of expressing that want or even of trivially recognizing that an open-coding will occur. So let's assume that this equivalence is a bug in Maclisp. Ignoring the issue of how to APPLY or MAP macros, we couldn't even say (+ x1 x2 x3 ...) <=> (FIXNUM-IDENTITY (PLUS (FIXNUM-IDENTITY x1) (FIXNUM-IDENTITY x2) (FIXNUM-IDENTITY x3) ...)) because something like (FIXNUM-IDENTITY (PLUS (FIXNUM-IDENTITY #o377777777777) (FIXNUM-IDENTITY #o377777777777) (FIXNUM-IDENTITY (- #o377777777777)))) is simply not the same as (+ #o377777777777 #o377777777777 (- #o377777777777)) because + asserts something overly strong for this case. Namely, (FIXNUM-IDENTITY (PLUS (FIXNUM-IDENTITY (PLUS (FIXNUM-IDENTITY #o377777777777) (FIXNUM-IDENTITY #o377777777777))) (FIXNUM-IDENTITY #o377777777777))).
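To make the numbers in KMP's example concrete: #o377777777777 is the largest positive 36-bit PDP-10 fixnum, 2^35-1. The following sketch is in present-day Common Lisp, whose + is generic like Maclisp's PLUS; FIXNUM-+ is a hypothetical model of the wrapping word arithmetic that the compiled "+" performed, not any dialect's real function:

```lisp
;;; #o377777777777 = 2^35 - 1, the largest positive PDP-10 fixnum.
;;; Generic addition (Maclisp PLUS) promotes to a bignum:
;;;   (+ #o377777777777 #o377777777777) => #o777777777776
;;; A model of the wrapping 36-bit two's-complement addition that
;;; the compiler open-coded for "+" (name invented for illustration):
(defun fixnum-+ (x y)
  (let ((sum (ldb (byte 36 0) (+ x y))))  ; keep only the low 36 bits
    (if (logbitp 35 sum)                  ; sign bit set => negative
        (- sum (ash 1 36))
        sum)))
;;; (fixnum-+ #o377777777777 #o377777777777) => -2,
;;; the compiled result KMP quotes, versus the interpreted bignum.
```

The whole +/PLUS disagreement in the thread is exactly the gap between these two additions on arguments near the fixnum boundary.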
So let's have a simplicity criterion here that says MAXIMUM is simple to open-code (relatively speaking) on a given set of inputs that are homogeneous in type because the correct MAX algorithm can be chosen from just that criterion. By the same criterion, PLUS (of more than 2 arguments) is hard to open-code because it involves making assumptions about the intermediate results expected. By this criterion, I think +/PLUS really do have a better justification for having separate names. I'd like to see us get away from having lists of the optimizations we know to expect and trying to do a particular declaration so that a particular optimization we know of will happen. FIXNUM-IDENTITY and FLONUM-IDENTITY are useful tools in that respect -- they're just far too hard to type and probably too restrictive. Wouldn't it be neat if you could do (MAX (ASSERT (CAR LIST) #'PLUSP) (ASSERT (CDR LIST) #'MINUSP)) and have the compiler infer this to be the same as (CAR LIST)? (Probably supplying a predicate wouldn't be the right thing, but you get the idea.) In any case, I think the LispM is going the wrong way by assuming that just because arithmetic is handled in microcode, that interpreted error checking and compiler-error-checking primitives like FIXNUM-IDENTITY, etc. are bad. We need to encourage people to put down anything they are willing to. Fine if they don't want to, but they'll reap benefits for so doing in terms of running speed, compilation speed, and error diagnostics. Additionally, I think short names for these operators would make people less afraid of using them. Maybe (FIX-I exp) and (FLO-I exp) ...? Anyway, I don't think MAX/MAXIMUM deserve two names. Neither do ^$ and EXPT. As often as ^$ has ever been used in Maclisp, I doubt that a (DEFMACRO ^$ (X Y) `(EXPT (FLO-I ,X) (FIX-I ,Y))) wouldn't have been just as good. -kmp ps Perhaps more flexible control of open-coding would be good.
Eg, one or more of these ideas might be useful: * MAX wouldn't open code unless you did (DECLARE (SHOULD-OPEN-CODE MAX)) or would unless you did (DECLARE (SHOULDNT-OPEN-CODE MAX)) You could force the opposite of the default by saying (PLEASE-OPEN-CODE ...) or (REFRAIN-FROM-OPEN-CODING ...) -- I'm not sure if the ... should be the function or the form. * (DECLARE (CONSERVE SPACE)) or TIME or STACK or whatever... so that the compiler would open-code iff it would be more efficient for CONSERVE'd resources... An alternate flavor on this would be (DECLARE (CONSERVE-IN-ORDER-OF-PRIORITY STACK SPACE HUNK8-SPACE MULTICS-FUNNY-MONEY TIME GCTIME ...))  Date: 3 December 1980 11:17-EST From: Jon L White To: KMP at MIT-MC cc: LISP-FORUM at MIT-MC Re: Open-coding, and proliferation of function names Although this discussion started out over MOON's observation (10 years late!) that MAX and MIN are still not open-coded in MacLISP, the general question came up about function names for the "fixnum-only" versions of arithmetic. Date: 2 December 1980 18:41-EST From: Kent M. Pitman Subject: Special names for Open-Coded Functions Date: 2 December 1980 1034-EST (Tuesday) From: Guy.Steele at CMU-10A To: Moon@AI, Bug-Lisp Re: MAX not open-coded . . . In some kind of theoretical discussion, it might be nice to remark that GREATERP and MAX can be based solely on the generic functions along with type information, but what conceivable practical value would accrue by flushing the type-specific names, or by turning them into macros which generate the type-declared generic code? You'd still have names like "<" and "<$" (the latter for flonum LESSP -- useful primarily on machines like the VAX where floating-point comparisons cannot be done with the same instructions as fixed-point comparisons). But in fact you go way astray in picking PLUS for an example -- the "+" function is significantly different from PLUS.
There is no way to make a mechanical conversion from "+" into PLUS, since the former is some kind of modular arithmetic, and the latter is one-to-one. It is primarily a convenience for efficiency that a programmer can substitute "+" for PLUS, when he knows that the two will have the same result over the particular limited domain of application.  Date: 4 December 1980 00:49-EST From: Kent M. Pitman To: JONL at MIT-MC cc: LISP-FORUM at MIT-MC Re: Open-coding, and proliferation of function names Date: 3 December 1980 11:17-EST From: Jon L White But in fact you go way astray in picking PLUS for an example -- the "+" function is significantly different from PLUS. ----- Ah, but I used them EXPLICITLY as examples of functions that don't fall into this category. You didn't read carefully. -kmp ps to those who didn't already catch it in my previous message. There's a missing (- ...) which ought to be inside a FIXNUM-IDENTITY in one of the displays. If you don't see it, your mind is probably making the right patch automatically. If you noticed it, please assume I meant the obviously correct thing ...  Date: 4 December 1980 1424-EST (Thursday) From: Guy.Steele at CMU-10A To: JONL at MIT-MC (Jon L White) cc: lisp-forum at MIT-MC Re: Open-coding, and proliferation of function names Maybe I don't remember the message properly, but I think KMP did not go astray -- I thought he was making the very point that while MAX can be mechanically converted, PLUS cannot, and therefore PLUS has more right to an alter ego than MAX does. And indeed, as he points out, the compiler is in error in converting (PLUS (FIXNUM-IDENTITY X) 3) into (+ X 3), if this is in fact what it does. I might point out that when I suggested MAXIMUM/MAX/MAX$, the xxx-IDENTITY functions didn't exist. I agree with KMP's analysis, but also with JONL's conclusions.
xxx-IDENTITY -- or even FIX-I and FLO-I (flatfoot FIX-I with a FLO-I FLO-I) -- is too painful to type all over the place, and within their limited domain MAX$ and + and >$ are extremely convenient and useful, whether or not they are considered "primitive" or "macro".  Date: 4 December 1980 18:22-EST From: Kent M. Pitman To: Guy.Steele at CMU-10A cc: JONL at MIT-MC, LISP-FORUM at MIT-MC Re: Open-coding, and proliferation of function names Date: 4 December 1980 1424-EST (Thursday) From: Guy.Steele at CMU-10A xxx-IDENTITY -- or even FIX-I and FLO-I (flatfoot FIX-I with a FLO-I FLO-I) -- is too painful to type all over the place, and within their limited domain MAX$ and + and >$ are extremely convenient and useful, whether or not they are considered "primitive" or "macro". Who said anything about typing them all over the place? You'd do (DECLARE (FIXNUM ...)) for variables, so you'd never need (FIX-I ...) around a variable except in rare (probably uninteresting) cases where it happens to have a fixnum for an instant and you happen to know that. As for things like (GREATERP (+ I J) K) if K was declared FIXNUM, the rest would be inferred easily. So in mathy expressions, very few FIX-I's would be needed either. As for structure references, you might find (GREATERP (FIX-I (CAR X)) (FIX-I (CAR Y))) pretty ugly, but probably you should have used a typed data accessor anyway. Eg, (DEFMACRO FOO (X) `(FIX-I (CAR ,X))) so that you could say (GREATERP (FOO X) (FOO Y)). In fact, if GREATERP and > were synonymous, you could have said (> (FOO X) (FOO Y)), thus reducing the net number of chars (if you're into that). It's probably true that greaterp tests occur often enough that several flavors for brevity might be nice to provide. I'm not convinced that this is true, though, of MAX -- a function I have probably only ever used four or five times in all my Lisp coding. Maybe other people use MAX more often. 
But unless someone can show me some real applications (plural) where MAX is really used more than once or twice a file, I'll remain unconvinced as to the need for a name split on that function. It would leave open the question of why there shouldn't be a different function for SIN of fixnums, for CHAR-UPCASE of chars known to be in the alphabetic range, for IO from a known SFA, etc. I would like such declarations to be available, but I don't think that building them into the names -- except for the most common idioms -- is a good idea.  Date: 5 December 1980 2150-EST (Friday) From: Guy.Steele at CMU-10A To: Kent M. Pitman cc: lisp-forum at MIT-MC Re: Open-coding, and proliferation of function names I agree that names should not proliferate too greatly. However, I also believe that consistency dictates that a convention be applied reasonably uniformly. If you can attach a "FIX-I" or "FLO-I" tag to most common arithmetic functions by spelling it differently in an obvious way, then you ought to be able to do it for all, rather than having to remember exceptions. (Actually, GCD is used possibly even less often than MAX, but we have \\, do we not?) Date: 31 January 1981 06:35-EST From: Jon L White To: GJC at MIT-MC cc: LISP-FORUM at MIT-MC Re: Flush Old I/O symbols in MacLISP? Date: 30 January 1981 19:52-EST From: George J. Carrette Subject: Flushable functions? I think these symbols are flushable from the default environment: UWRITE, UFILE, UKILL, UREAD, UPROBE, UAPPEND, CRUNIT, UCLOSE. I can't believe that any system code depends on them or that any large-system programmer couldn't do without them. For people with old code I think a well-advertised old-maclisp compatibility file would suffice. RWK has suggested an autoloadable function which would simply putprop a lot of autoload properties for "useful" MacLISP facilities. A long time ago, such a thing existed on the LIBLSP directory in order to put autoload properties for each function thereon.  
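The autoloadable setup function RWK and JONL allude to might look something like this, in MacLisp style; the OLDIO file name is invented for illustration, and only the LIBLSP directory is taken from the message above:

```lisp
;;; Hypothetical sketch of RWK's suggestion: one function, itself
;;; autoloadable, that putprops AUTOLOAD properties for the old I/O
;;; symbols so they load from a compatibility file on first use.
(DEFUN OLDIO-AUTOLOADS ()
  (MAPC #'(LAMBDA (FN)
            ;; (PUTPROP symbol value indicator): the AUTOLOAD
            ;; property names the file holding the definition.
            (PUTPROP FN '((LIBLSP) OLDIO) 'AUTOLOAD))
        '(UWRITE UFILE UKILL UREAD UPROBE UAPPEND CRUNIT UCLOSE)))
```

This keeps the symbols out of the default environment while leaving old code working, which is the compromise both messages are after.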
Date: 31 JAN 1981 0933-EST From: RMS at MIT-AI (Richard M. Stallman) Re: optimizers Date: 30 January 1981 19:28-EST From: George J. Carrette The way OPTIMIZERS were originally implemented on the lisp-machine was that the function of the same name must have been defined first before the optimizer could be called. I'm the original implementer of OPTIMIZERS, and I don't think this was ever so. It certainly isn't true now (from looking at the code).  Date: 1 February 1981 04:46-EST From: Robert W. Kerns To: JONL at MIT-MC cc: GJC at MIT-MC, LISP-FORUM at MIT-MC Re: Flush Old I/O symbols in MacLISP? Date: 31 January 1981 06:35-EST From: Jon L White Date: 30 January 1981 19:52-EST From: George J. Carrette Subject: Flushable functions? I think these symbols are flushable from the default environment: UWRITE, UFILE, UKILL, UREAD, UPROBE, UAPPEND, CRUNIT, UCLOSE. I can't believe that any system code depends on them or that any large-system programmer couldn't do without them. For people with old-code I think a well-advertised old-maclisp compatibility file would suffice. RWK has suggested an autoloadable function which would simply putprop a lot of autoload properties for "useful" MacLISP facilities. A long time ago, such a thing existed on the LIBLSP directory in order to put autoload properties for each function thereon. MACSYMA still uses these, unless someone has finally changed them very recently. Not that it should be converted.  Date: 2 February 1981 10:54-EST From: James E. O'Dell To: RWK at MIT-MC cc: GJC at MIT-MC, JONL at MIT-MC, LISP-FORUM at MIT-MC, JPG at MIT-MC Re: Flush Old I/O symbols in MacLISP? I think these symbols are flushable from the default environment: UWRITE, UFILE, UKILL, UREAD, UPROBE, UAPPEND, CRUNIT, UCLOSE.......... MACSYMA still uses these, unless someone has finally changed them very recently. Not that it should be converted.............. I recently wrote a writefile for Multics that uses new i/o because old i/o on multics didn't work.
If you guys want to flush this stuff I could probably hack one for ITS. This might be one of the last places it's used in Macsyma, but I'm not sure; I'd have to check, or JPG would have to tell us.  Date: 2 February 1981 17:07-EST From: Jeffrey P. Golden To: JIM at MIT-MC cc: LISP-FORUM at MIT-MC, RWK at MIT-MC, GJC at MIT-MC, JONL at MIT-MC, KMP at MIT-MC Re: Old IO symbols in MACSYMA I believe all that's left now is one call each to UWRITE, UFILE, and UAPPEND in JPG;SUPRV . MACSYMA IO is in a state of flux now, so I am not sure how long they will remain. (Obviously they could be converted to NewIO if someone felt that was urgent. The calls are in PDP10-only code.) Anyway, I don't think this should have any bearing on whether the Old IO symbols remain in the default LISP environment. Date: 30 January 1981 10:00-EST From: Jon L White To: RMS at MIT-MC cc: LISP-FORUM at MIT-MC Re: Compatible? with whom? I presume your note below wanted to go to LISP-FORUM rather than BUG-LISP. Date: 30 January 1981 00:34-EST From: Richard M. Stallman I hear that Maclisp now has something like the Lisp machine's user-supplied compiler optimizers. But it is not compatible with the Lisp machine. I think it ought to be changed to be compatible, especially since it isn't in use by users yet. How about letting the LISPM fix its idea of "optimizers" first? For example, 1) each "optimizing/trans" function should return two values, the second saying whether or not it did anything. (displacing macros are still used in the MacLISP world, so EQ tests can't be relied upon.) 2) Let's get a common DEFTRANS name, which not only pushes something onto the appropriate property of a symbol, but also takes some other action if the symbol is not FBOUNDP (e.g. defining a MACRO for it which warns you in interpretive code when you try to use it; subsequent SUBR definitions would 'wipe-out' this warning macro).
3) DEFMACRO could check for "optimizers/trans" properties; or alternatively the DEFTRANS feature could put a MACRO property (in MacLISP). 4) "optimizers" is a bad misnomer -- these things are purely source-to-source transformations, and have nothing to do with optimizing (and in fact aren't even limited to compilation, although that seems to be a reasonable convention for now). How about SOURCE-TRANS? or maybe even COMPILATION-MACRO? In fact, if item 2 is adopted, then it doesn't matter what the property name is called, since it should remain invisible to the user.  Date: 30 January 1981 10:28-EST From: George J. Carrette To: JONL at MIT-MC cc: LISP-FORUM at MIT-MC, RMS at MIT-MC Re: Compatible? with whom? I very much agree with JONL that the exact internal semantics of SOURCE-TRANS or COMPILER:OPTIMIZERS or whatever it is, is fairly unimportant as long as the same basic feature is available. Nobody would be writing something like (PUSH #'(LAMBDA (FORM) ...) (GET 'FOO 'COMPILER:OPTIMIZERS)) in source code anyway. Perhaps it would simplify things to PUNT the MACRO property, at least for the compiler? This would give the compiler fewer things to check for. As to the features of SOURCE-TRANS, I think it would be nice if the property called got a second descriptive argument telling EVAL-FOR-VALUE, EVAL-FOR-EFFECT, or EVAL-FOR-PREDICATE. Certainly the present Maclisp/NIL implementation of PUSH/POP could use this info. Also, special mapping frobs, some for value, some for effect, would not be needed. I have tried some examples of this in "K", and it certainly cannot take the consing out of (PROGN (DO ((L L (CDR L)) (N NIL (CONS (F (CAR L)) N))) ((NULL L) N)) NIL) -gjc  Date: 30 January 1981 10:43-EST From: George J. Carrette To: RMS at MIT-AI cc: LISP-FORUM at MIT-MC I like your idea of putting names of optimizing-frobs in/near the functional definition. "Redefinition" is a funny thing.
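[JONL's points 1 and 2 above can be sketched concretely. Everything below is hypothetical illustration in modern Common Lisp terms: DEFTRANS, the SOURCE-TRANS property, and the warning stub are invented names for this sketch, not facilities of MacLISP or the Lisp Machine.]

```lisp
;; Point 1: a transformer returns (VALUES new-form changed-p), so the
;; caller needn't rely on an EQ test (unreliable with displacing
;; macros).  Point 2: DEFTRANS installs a complaining stub macro when
;; the name has no functional definition yet.
(defmacro deftrans (name (form-var) &body body)
  `(progn
     (push (lambda (,form-var) ,@body)
           (get ',name 'source-trans))
     (unless (fboundp ',name)
       (setf (macro-function ',name)
             (lambda (whole env)
               (declare (ignore whole env))
               ;; Expand into a runtime complaint; a later real
               ;; definition of NAME simply replaces this stub.
               (list 'error "~S used but never defined"
                     (list 'quote ',name)))))
     ',name))

;; Example: rewrite (SQUARE X) into (* X X) when X is a simple
;; variable, reporting via the second value whether anything was done.
(deftrans square (form)
  (if (symbolp (second form))
      (values `(* ,(second form) ,(second form)) t)
      (values form nil)))
```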
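[GJC's evaluation-context suggestion can also be illustrated; the :EVAL-FOR-EFFECT keyword and two-value return are inventions of this sketch, specialized to the shape of his example rather than a general algorithm.]

```lisp
;; If the transformer is told the DO's value will be discarded, it can
;; drop the accumulator variable that exists only to build the result,
;; eliminating the consing GJC complains about.
(defun trans-do-for-effect (form context)
  (if (eq context :eval-for-effect)
      (values '(do ((l l (cdr l)))      ; keep the traversal
                   ((null l))           ; but no result expression
                 (f (car l)))           ; call F only for its effects
              t)
      (values form nil)))
```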
If Lisp were a compiled-only language then it would be simple to have lexical macro or functional redefinitions; however, given that the usual thing (i.e. Lispm & Maclisp) is to have the interpreter be incompatible, using dynamic scoping exclusively, for efficiency, we have a messy problem. [Maybe super-special-cases in the compiler, e.g. CAR/CDR, (maybe) should be untouchable by the user, in that no redefinition mechanisms for them are seriously supported? Think about how even lexicality is destroyed by macro-expansions.] The complexity of the options involved sounds like a flavour interface to the COMPILER (oh great beast) is called for? (oh my...) -gjc  Date: 30 January 1981 1351-EST (Friday) From: Guy.Steele at CMU-10A Re: Source-to-source transformations @Begin(Flame) Well, certainly not all optimizers are source-to-source, but then again just because a transform is source-to-source doesn't mean it isn't an optimizer. @End(Flame) @Begin(Criteria) If a special construct is adopted for defining optimizers, or transformers, or whatever they are to be called, I think these criteria should be observed: @Begin(Itemize) The name of the construct should begin with "DEF", primarily for the sake of @. The abbreviation TRANS should be avoided as being too ambiguous (transform? translate? transsubstantiate? translucent? transporter?). @End(Itemize) Additionally, compatibility between all dialects concerned is reasonably important, but may not be critical. Should this be released as a user-available feature, or is it for use primarily by the compiler writer? Currently most such transforms are indeed written with an eye to the eventually produced machine code. The user already has macros. Introducing source-to-source transforms provides the user with yet another way to make compiler and interpreter behave differently if we aren't careful. @End(Criteria) @Begin(Opinion) If we must have it, let's call it DEFTRANSFORM. Let it not become too hairy.
Letting it know the evaluation context (EVAL, COND, or PROGN? so to speak) isn't a bad idea. In fact, maybe macros might want to have such an argument also? @End(Opinion)  Date: 30 JAN 1981 1600-EST From: RMS at MIT-AI (Richard M. Stallman) Re: optimizers I can't understand all of the things that JONL says are wrong with optimizers on the Lispm. As far as I do understand them, 1) There is no problem with EQ tests. Displacing macros have nothing to do with this -- the Lisp machine also has them, and that proves experimentally that they are no problem. The reason they are no problem is that optimizers aren't macros and we don't have (or need) displacing optimizers (since they only affect compilation). I think it is much cleaner just to return the transformed form (not to mention much better for compatibility!) so I'm against changing them. 2) I would not object to a def.. function to define optimizers. I don't see why it should warn about putting optimizers on something which is not FBOUNDP. I don't think that the order of defining the optimizers vs the function should matter. Also I don't understand the suggestion about creating a definition which prints a warning in interpreted code. The lack of a definition ought to do that anyway. 3) I don't believe it is right for DEFMACRO to check in any way for the existence of optimizers, because it is wrong for redefining the function (either macro or not) to flush the optimizers. If I re-eval a corrected definition of a function, I don't want that to flush the optimizers. 4) "optimizers" is precisely appropriate for source transformations which happen only in the compiler. These are used for nothing except optimization of things which could instead compile into function calls that would be much slower (and might require some variables to be special). For example, this is how DO and MAP get open-coded. And a lot of other things that would be "cretinous code" but not complete disasters if they were not optimized. 
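[The compile-only convention RMS defends — an optimizer simply returns a form, with EQ meaning "no change" — might look like this. The COMPILER-OPTIMIZERS property name, the driver, and the ASSQ open-coding are illustrative sketches, not the actual Lisp Machine internals.]

```lisp
;; Run each optimizer registered for the form's operator; an optimizer
;; returns the same form (EQ) to say "no change", or a new form, which
;; is then optimized again.  Nothing here ever runs in the interpreter.
(defun optimize-form (form)
  (let ((opts (and (consp form)
                   (symbolp (car form))
                   (get (car form) 'compiler-optimizers))))
    (dolist (opt opts form)
      (let ((new (funcall opt form)))
        (unless (eq new form)
          (return (optimize-form new)))))))

;; Example optimizer: open-code (ASSQ key alist) as a DO loop.
;; (Ignores the edge case of NIL elements in the alist.)
(push (lambda (form)
        (destructuring-bind (op key alist) form
          (declare (ignore op))
          `(do ((k ,key)                ; evaluate KEY just once
                (l ,alist (cdr l)))
               ((or (null l) (eq (caar l) k))
                (car l)))))
      (get 'assq 'compiler-optimizers))
```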
If we decide also to have source to source transformations that also apply in the interpreter, then they could be essential parts of the definition of something, and they shouldn't be called "optimizers". In any case they could not be called "optimizers" because the existing optimizers are definitely intended to be used only by the compiler. Anything which is used only by the compiler has to be a form of optimization since it can't be an essential part of the definition of the function it applies to. PS. The reason why just returning the transformed form is better for compatibility is that it doesn't require multiple values.  Date: 30 January 1981 16:32-EST From: Glenn S. Burke To: RMS at MIT-AI cc: LISP-FORUM at MIT-AI Re: optimizers I agree completely with RMS, and i would like to add two points. For one, it is a complete loss that to utilize such a feature one must use multiple values, ESPECIALLY in Maclisp. I might feel differently if the support were small and efficient, but the nested macro expansions produced, and the quantity of code which produces them, is just too much for me to be able to use when others will be using my code. (The interpreter support for this, based on the existing Maclisp implementation, can be done in less than 200. words of binary program space; this involves lap-coded FSUBRs. The compile-time support (ie macro-definitions) takes somewhere around 500 words.) The second point, which is more of a suggestion, is that the DEFxxx should take a name for the particular optimization being defined, so that it can be appropriately clobbered if it gets redefined.  Date: 30 January 1981 17:31-EST From: Jon L White To: RMS at MIT-MC cc: LISP-FORUM at MIT-MC Re: FILTERs: Source-to-source code transformers Re your note about "can't understand ... [things] .. wrong with optimizers on the Lispm." 
Although I mentioned displace-ings in MacLISP, there are far more general side-effects than that (like updating a data-base); true, one could always frivolously copy the cons-cell in order to signal that something was done, but isn't this the out-dated way of returning two items of information? (see point 4 below). I believe you may not fully appreciate why users, as opposed to the compiler-maintainers, want such a feature. Such a thing is "in demand" with users because they want a chance to "filter" certain source-code expressions without usurping any prior functional definitions; otherwise, a macro-definition would be just dandy. Thus one could transform (RETURN X Y) into (RETURN (VALUES X Y)) without getting into the kind of loop that a macro-definition for RETURN would cause. InterLISP, for over a decade, has had three varieties of macros, one of which corresponds roughly to what I'm now calling "Filters". It's a loss that their Compiler Macros don't interpret; just ask any InterLISP user. As QUUX pointed out, there may be genuine interest in making these things interpretable -- the fact that YOU haven't used them that way doesn't argue against it. Such a "Filter" can do everything that a LISPM "optimizer" can -- if it's the intention of the LISPM community not to have the more general mechanism, then that seems reason enough to let the more general, user-oriented facility have a different name. Your postscript seems to imply that there is something wrong with multiple values; really? Don't they work on the LISPM? "PS. The reason why just returning the transformed form is better for compatibility is that it doesn't require multiple values."  Date: 30 Jan 1981 1447-PST From: Dick Gabriel Re: Optimizers I think that this conversation is a little misguided. First, the optimizers are only for the compiler, not the interpreter. Second, there is a concept (source-to-source transformations) plus a control structure (re-application until quiescence).
In the first case, people should not mind that compile cpu-time is sacrificed for runtime cpu-time. In the second case, the ordered list of optimizers is re-scanned until none applies. Since one can optimize an expression to any value (in theory), one needs a second value to report the success of the attempt. I do not see the completeness of this loss as clearly as some others do. -rpg  Date: 30 January 1981 18:38-EST From: Jon L White To: GSB at MIT-MC cc: LISP-FORUM at MIT-MC Re: Use of MULTIPLE-VALUE and VALUES Re your conjecture: Date: 30 January 1981 16:32-EST From: Glenn S. Burke Subject: optimizers . . . For one, it is a complete loss that to utilize such a feature one must use multiple values, ESPECIALLY in Maclisp. I might feel differently if the support were small and efficient, but . . . I don't know if there is a polite way to say this, but you seem to have lost all perspective on this facility: (VALUES x y) compiles into code that is exactly 2 (simple) instruction executions longer than merely setq'ing X and Y. (MULTIPLE-VALUE (X Y) ( ...)) compiles into code that is exactly 3 (simple) instruction executions longer than merely calling the function and setq'ing X and Y, except when there are too few return values, in which case an ** autoloadable ** function would be called. (Even the out-of-core file needn't be loaded if one supplies his own "patch-over" function here.) (Checking for failure to return "multiple-values" could be done, at a cost of about another 2 instructions, but this check is debatable.) If the top cell of an arg to "Filter/optimizer" is to be copied, then surely that copying itself will take more time than those few instructions. If one has 50 (!) "Filter/optimizer" functions in his system, ** and if it could be proven that the two-value return is indeed two words longer than any other alternative **, then he will lose all of 100 words out of his address space. Really big deal.
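[RPG's description — a transformation concept plus a control structure of re-application until quiescence — and JONL's RETURN example can be sketched together. The names and the two-value protocol are hypothetical illustrations, not any installed implementation.]

```lisp
;; A "filter" fires at most once per application and reports success
;; in its second value.  Unlike a macro on RETURN, it cannot loop:
;; its output no longer matches its own trigger condition.
(defun return-filter (form)
  (if (> (length form) 2)                       ; (RETURN x y ...) ?
      (values `(return (values ,@(rest form))) t)
      (values form nil)))                       ; (RETURN x): untouched

;; RPG's control structure: re-scan the ordered list of filters until
;; a full pass reports no change.
(defun apply-until-quiescent (form filters)
  (loop
    (let ((changed nil))
      (dolist (f filters)
        (multiple-value-bind (new did) (funcall f form)
          (when did (setq form new changed t))))
      (unless changed (return form)))))

;; (apply-until-quiescent '(return a b) (list #'return-filter))
;; => (RETURN (VALUES A B))
```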
Using VALUES and MULTIPLE-VALUE doesn't require one to use VALUES-LIST or MULTIPLE-VALUE-LIST; but even if one did use them, I don't think a MacLISP implementation of them could be made significantly smaller and more efficient. Also, the macro support lives in the MLMAC file, which isn't generally loaded in a user's system; with all that space reclaimed in the compiler recently, how could one carp about a couple hundred words of new macros? especially for such a winning idea as multiple values.  Date: 30 January 1981 18:44-EST From: Glenn S. Burke To: JONL at MIT-MC cc: LISP-FORUM at MIT-MC Re: Use of MULTIPLE-VALUE and VALUES Date: 30 JAN 1981 1838-EST From: JONL at MIT-MC (Jon L White) ... Using VALUES and MULTIPLE-VALUE doesn't require one to use VALUES-LIST or MULTIPLE-VALUE-LIST; but even if one did use them, I don't think a MacLISP implementation of them could be made significantly smaller and more efficient. Also, the macro support lives in the MLMAC file, which isn't generally loaded in a user's system; with all that space reclaimed in the compiler recently, how could one carp about a couple hundred words of new macros? especially for such a winning idea as multiple values. What space reclaimed recently? Are you really serious? Are you saying that since you are going to be kind enough not to have GRIND, TRACE, and every conceivable Maclisp utility loaded in it doesn't matter that you throw in some new frills? I have had LSB using XCOMPLR recently ONLY because the cretin who put together that abortion with the World loaded didn't have the consideration to install something reasonable in its place. I am NOT comparing the current XCOMPLR to that travesty currently installed as COMPLR, i am comparing it to COMPLR of one week ago. We have had address space problems for months, it didn't start just when someone tried to optimize sharing and pessimize memfree.  Date: 30 January 1981 18:57-EST From: Glenn S. 
Burke To: JONL at MIT-MC cc: LISP-FORUM at MIT-MC Re: Use of MULTIPLE-VALUE and VALUES As to my "conjecture" and me missing the point, i was not ONCE talking about instructions executed, i was talking about code produced. And i was also not just talking about support in the Lisp interpreter, i am also concerned with the compiler. PLEASE don't try to confuse the issue. Is it not the case that XCOMPLR already has all of these nifty features pre-loaded? Did it occur to you that just maybe the support necessary for this complete winnitude might just overshadow the savings they will provide in PDP-10 Maclisp, which just can't fit so much stuff, so that most users won't see these savings realized? Especially from the standpoint of users who DON'T use these features, and aren't interested in them. But their compiler has them anyway.  Date: 30 January 1981 19:22-EST From: George J. Carrette To: GSB at MIT-ML cc: JONL at MIT-MC, LISP-FORUM at MIT-MC Re: Use of MULTIPLE-VALUE and VALUES Glenn, sounds like you have tough problems. Can't you work out something with JONL for custom modularizations of the COMPLR? I'm sure it will take a lot of work to figure out exactly what to load, but it seems you should be able to come up with something which had what you wanted without the NIL-related stuff. As I understand it, you run LSB on ML, right? I think I'm partially to blame for suggesting the optimization of sharing and pessimization, as you put it, of memfree in complr of a few weeks ago. On MC it was nice to have as much sharing as possible between MCL (macsyma COMPLRs), COMPLRs, and random lisps being used for development and hacking. It's really bad to hear all this fighting about address space. -gjc  Date: 30 January 1981 19:29-EST From: Jon L White To: GJC at MIT-MC cc: GSB at MIT-MC, LISP-FORUM at MIT-MC Re: COMPLR address space None of Glenn's problems are due to NIL stuff.
None of the new MacLISP development is particularly NIL stuff (multiple values have been on the LISPM for years). Indeed, the "intermediate" MACLISP dump cost us more than 7K of address space, and is being dropped. As soon as agreement is reached about XLISP, then XCOMPLR will replace the currently bloated COMPLR.  Date: 30 January 1981 19:28-EST From: George J. Carrette To: RMS at MIT-AI cc: LISP-FORUM at MIT-AI Re: optimizers The way OPTIMIZERS were originally implemented on the lisp-machine was that the function of the same name must have been defined first before the optimizer could be called.  Date: 30 January 1981 21:48-EST From: Daniel L. Weinreb Re: Optimizers and macros Users are always wanting to override system functions. If you decide to change the meaning of some function, macro, or symbol that names a special form, then ideally you should be able to do that, and if there is an optimizer sitting around that "knows" the meaning of the symbol, and you change the meaning, then that optimizer had better be flushed. The system can't tell the difference between a DEFUN (or DEFMACRO, et al.) that is changing the meaning of something, and one that is merely a new version with a bug-fix, and so it can't tell whether to wipe out the optimizers. Schemes based on remembering what file things were defined in are very prone to confusing effects (what if you type in a new definition at Lisp top level? What if you have a patch file that is only making bug fixes?) RMS's suggestion seems to me like the best way to solve this problem: have the definition of the function (or macro) include, textually, the names of optimizers to use, and have those be free-standing functions. This has all the right properties. There is no problem with EQ checking to figure out whether an optimizer did something, since as RMS pointed out there is no point in making a displacing optimizer; they only happen at compile-time anyway.
In fact, the Lisp Machine compiler actually uses EQ checking to figure out when to stop macro expanding, and so does not actually allow old-style displacing macros, as it happens. GLS's and GSB's suggestions about DEFTRANSFORM are good, but if we do things as RMS suggested (using local declarations) then there is no need for it. I actually think the name OPTIMIZER is good. It stresses the fact that, as RMS said, it is critically important that the optimizer does not change the functionality of the form it is transforming AT ALL. These things are NOT like macros; they must not do anything semantically interesting, but merely produce a different (and presumably more efficient) way to do exactly the same thing. This is because they only happen in the compiler; it must not make any difference to the semantics of the program if they do not get run. Only the efficiency may be affected. Talk about making these things work in the interpreter is misguided: we already have source-to-source transformations that also work in the interpreter, and they are called macros. Macros are the right thing for doing actually interesting transformations; optimizers are for when you know something will run faster compiled if it is re-expressed in a certain way.  Date: 30 January 1981 21:51-EST From: Daniel L. Weinreb Re: P.S., regarding optimizers I forgot to say: the reason that associating the optimizers with the definition the way RMS suggested is right is that the optimizer must, by its nature, go along with the definition. If the definition is changed, the optimizer may need to be changed too. They both understand the contract of the function, and if it is changed then they may both have to be changed. Therefore, they belong together.  Date: 30 January 1981 22:06-EST From: George J.
Carrette To: dlw at MIT-AI cc: LISP-FORUM at MIT-AI Re: Optimizers and macros In your last note you forgot about one example of redefining system functions which was a very important motivation for having a SOURCE-TRANSFORMATION which had nothing to do with compilation, and which could not be supported via macros alone. [Does this parse?] The example is when somebody wants to "extend" a system built-in, not "redefine" it. That is, he only wants to define certain syntactical uses of it, but still be able to access the built-in. This sort of thing is very good to have in COMPATIBILITY-PACKAGES. E.g. (RETURN X) is still (RETURN X), but (RETURN X Y) might be (RETURN (VALUES X Y)) [to beat a live horse]. You state "users are always wanting to override system functions", which could be true, but I have seen that "users are always wanting to extend system functions." -gjc p.s. I don't think we need any axe-grinding about EQ-test vs. second-value returning.  Date: 30 January 1981 2330-EST (Friday) From: Guy.Steele at CMU-10A Re: Optimizers I must agree that the best place to mention optimizers is right in the function definition. So what is the syntax going to be? DEFUN is quite cluttered enough as it is, what with &OPTIONAL, &REST, and that completely awful-looking &AUX; (DEFUN (FOO BAR) ...) and (DEFUN (FOO BAR BAZ) ...); declarations at the head of the body; and documentation strings at the head of the body. One can already in principle write things like: (DEFUN (FOO BAR BAZ) (A &OPTIONAL (B (HACK A)) &REST Z &AUX EIGHTEEN-NUTTY-THINGS) "This function is not documented because it is too ridiculous to explain." (DECLARE (FIXNUM B)) (DECLARE (FLONUM A)) (DECLARE (SPECIAL Z)) (PRINT "HELP!
The definition is down here somewhere!")) Well, to make a bad situation worse, I observe that just as strings as not-the-last-thing of a lambda-body aren't too useful, neither are symbols -- so we can usurp that case as part of the DEFUN syntax (*not* as part of the general DEFUN syntax). So for the sake of complete ridiculousness I propose that if the first thing after the argument list is a symbol not at the end of the body, then it and the next sexp form a property-value pair to be DEFPROP'd; and iterate. So one might have: (DEFUN (FOO BAR BAZ) (A &OPTIONAL (B (HACK A)) &REST Z &AUX EIGHTEEN-NUTTY-THINGS) OPTIMIZERS (FOO-OPTIMIZER PUTPROP-OPTIMIZER GROSS-OPTIMIZER) DOCUMENTATION "This function is not documented because it is too ridiculous to explain." COMMUTATIVE T ASSOCIATIVE RIGHT GRIND-FORMAT ((GLEEP) 4 %? ((GRUBBETZ) 4$) ) (DECLARE (FIXNUM B)) (DECLARE (FLONUM A)) (DECLARE (SPECIAL Z)) (PRINT "This definition hasn't a SNOBOL's chance of working!")) So, **PLEASE**! SOMEBODY, QUICK, THINK OF SOMETHING BETTER THAN THIS, BECAUSE THIS IS AWFUL! --Quux  Date: 31 January 1981 00:17-EST From: Kent M. Pitman Re: Thoughts on GLS's (non)suggestion Fortunately, (DEFUN FOO (X Y) X ; ignored (FROB Y)) would be broken by such a mechanism... saved from one crock by another. Here's another (probably losing) suggestion -- you could give DEF-FORMs a plist area of their own. So that (DEFUN FOO (X) X) ;has no plist (DEFUN (FOO BAR) (X) ...) ;is handled specially as explained below (DEFUN (FOO . (tag1 val1 tag2 val2 ...)) arglist . body) ;is the ;general form The case of (FOO BAR) wouldn't show up (being an odd-length prop list), so would be made to mean (PROP-NAME BAR) or something like that. Pick whatever label you want. The more general form would be (DEFUN (FOO PROP-NAME x) ...) So definitional properties would all live in this area. You might say, for example, (DEFUN (FOO PROP-NAME FOO-EXPR OPTIMIZERS (...) COMPILE-ME T ;This would replace DEFMACRO-FOR-COMPILING ...)
arglist . body) This would offer an alley for unifying the Maclisp and LispM defmacro incompatibilities, since it offers a way for both parties to get the functionality they want. The few users of (DEFUN (FOO exprprop subrprop) ..) available in Maclisp right now would have to change this to (DEFUN (FOO PROP-NAME exprprop COMPILED-PROP-NAME subrprop) arglist . body)  Date: 31 January 1981 00:29-EST From: George J. Carrette To: KMP at MIT-MC cc: LISP-FORUM at MIT-MC Re: Thoughts on GLS's (non)suggestion JAR isn't on line very often, so maybe I can tell you what he told me when he was thinking about this very problem as he first started working on the NIL project. It was his intention to have the Interpreter and Compiler use the CLASS system which was native to NIL, in a natural way for controlling all the semantics of all macroexpansions/transformations/optimizations which the system and user would be privy to. This means you would not have to be as simpleminded as some of the suggestions lately, but still be able to win if the CLASS system and filesystem {whatever communication channels are} was reliable {understandable}. -my opinion. -gjc  Date: 31 Jan 1981 0742-MST From: Griss at UTAH-20 (Martin.Griss) Re: Compilation in Standard LISP Any function in Standard LISP with the Portable LISP Compiler may have a 'COMPFN property, which causes this function to be compiled specially, rather than by simply expanding macros and then recursively invoking COMVAL. The user of course has to know how the compiler works, and then writes a function that compiles the construct as desired, putting it under the COMPFN property. We also have a PASS-1, source-to-source transformation phase, and can associate a PA1FN with any construct, causing it to compile differently (hopefully better!). Otherwise MACROS simply return a new form that is compiled in place of the old form (as it is interpreted in place of the old form by EVAL).
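[The Standard LISP arrangement Griss describes can be sketched as follows. The property names PA1FN and COMPFN come from his note; the installation syntax, the sample rewrite, and the driver are this sketch's own assumptions.]

```lisp
;; A PA1FN holds a pass-1 source-to-source rewrite; a COMPFN would
;; instead emit code directly, and so requires compiler knowledge.
(defun fancy-member-pass1 (form)
  ;; e.g. compile (FANCY-MEMBER x l) as plain (MEMBER x l)
  `(member ,@(rest form)))

(setf (get 'fancy-member 'pa1fn) #'fancy-member-pass1)

;; A pass-1 driver would then do something like:
(defun pass1 (form)
  (let ((fn (and (consp form) (get (car form) 'pa1fn))))
    (if fn (funcall fn form) form)))
```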
This scheme makes macros work cleanly for EVAL'ed code, and has them compiled OPEN in compiled code; there is still the PA1FN or COMPFN for further compilation optimisation. M griss  Date: 31 January 1981 09:48-EST From: Richard M. Stallman To: JONL at MIT-MC cc: LISP-FORUM at MIT-MC Re: Snide remarks Your postscript seems to imply that there is something wrong with multiple values; really? Don't they work on the LISPM? "PS. The reason why just returning the transformed form is better for compatibility is that it doesn't require multiple values." I thought that Maclisp didn't have multiple value returning, and I was concerned about compatibility with it. Maybe it has been implemented in Maclisp. I could easily have forgotten the announcement. If so, I think it's a good thing that it was. Meanwhile, my concern with compatibility with a system you maintain still does not deserve contempt from you even if it was based on a mistaken factual belief.  Date: 31 January 1981 10:12-EST From: Jon L White To: RMS at MIT-MC cc: LISP-FORUM at MIT-MC Re: "snide remarks"? Not really. There are several people who, for a long time, have known about MacLISP's multiple value stuff -- originally called MSETQ-CALL etc -- who are afraid to use them. It's not "contempt", Rich, to prod you to say what you think is wrong with the implementation. But that really isn't the issue -- it's one of using a modern tool when available; as RPG noted in his comment, there really are two independent pieces of information being returned in each scheme ("optimizers" or SOURCE-TRANS's), so that the called function can cooperate with the loop which is iterating over a list of actions until stability is reached. Being required to cons-copy the argument in order to signal that is like ERRSET having to listify its answer.  Date: 31 January 1981 10:37-EST From: Richard M.
Stallman To: JONL at MIT-MC cc: LISP-FORUM at MIT-MC Re: Filters Such a "Filter" can do everything that a LISPM "optimizer" can -- if it's the intention of the LISPM community not to have the more general mechanism, then that seems reason enough to let the more general, user-oriented facility have a different name. These filters are a good idea, and I wouldn't mind if we had them. But one thing filters are not so good for is WORKING ONLY IN THE COMPILER. There is a definite need for transformations which apply only in the compiler, and that's why I implemented the OPTIMIZERS property. By emphasizing the need for these, I'm not trying to oppose also allowing transformations that work in the interpreter too. THIS feature, the feature of transformations in the compiler only, really does deserve to be called "optimizers". The other feature, of transformations that would work in the interpreter too, certainly should not be called "optimizers". If they are what you have in mind, then we are talking about two different things and not really disagreeing.  Date: 31 JAN 1981 1048-EST From: RMS at MIT-AI (Richard M. Stallman) Re: Idea: New way for optimizers to work If the user redefines a function because he wants to recycle the name, and he doesn't care about the old meaning, then the right thing is to get rid of old optimizers. If the user redefines a function because he is fixing a bug in the old definition, or adding a feature, he probably wants to keep the old optimizers (though he may need to fix them as well). I don't think there is any hope that the system can second-guess the user and always do the right thing. So here's a way that the user can conveniently tell the system which is right: Perhaps the optimizers should be properties of the definition rather than of the function name. That is, they could live in the "debugging info alist" which is essentially a plist for the function. 
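[RMS's storage idea — optimizers attached to the definition object rather than to the name — can be caricatured in portable terms. The "debugging info alist" is a Lisp Machine notion; this sketch approximates it with a hash table, and all names here are invented.]

```lisp
;; Associate optimizer names with the definition object itself, so a
;; fresh definition starts with a clean slate, while re-installing the
;; same source re-supplies the same optimizers.
(defvar *definition-info* (make-hash-table :test #'eq))

(defun set-optimizers (fn names)
  (setf (gethash fn *definition-info*) names))

(defun optimizers-of (fn)
  (gethash fn *definition-info*))
```

Redefining a function creates a new function object, so under this scheme the new definition carries no optimizers until it declares its own.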
To make this win, the optimizer definitions would have to appear with the function or macro definition. They could not be put on separately. Then re-evalling the same definition, modified, would re-install the same old optimizers, whereas a totally separate definition would have its own list of optimizers. The usual style would be to define separate functions to do the work of the optimizers and supply their names in a local declaration in the definition of the function to be optimized. Any comments?  Date: 31 January 1981 11:52-EST From: George J. Carrette To: RMS at MIT-AI cc: LISP-FORUM at MIT-AI Re: optimizers Date: 31 JAN 1981 0933-EST From: RMS at MIT-AI (Richard M. Stallman) Date: 30 January 1981 19:28-EST From: George J. Carrette The way OPTIMIZERS were originally implemented on the lisp-machine was that the function of the same name must have been defined first before the optimizer could be called. I'm the original implementer of OPTIMIZERS, and I don't think this was ever so. It certainly isn't true now (from looking at the code). It was so the first time I tried to use them in compiling macsyma. [A few months ago.] Moon fixed it.  Date: 31 JAN 1981 1213-EST From: RMS at MIT-AI (Richard M. Stallman) Re: Syntax for optimizers in function definition The way other such things are done on the Lispm now is with local declarations. These can either use LOCAL-DECLARE or look like Maclisp local declarations at the front of the function, because anything of the latter format is converted to the former. Thus: (local-declare (arglist a b c &rest d) (defun foo (&rest list-of-all-args) (lexpr-funcall 'mumble t list-of-all-args))) or (defun foo (&rest list-of-all-args) (declare (arglist a b c &rest d)) (lexpr-funcall 'mumble t list-of-all-args)) I personally prefer the latter. So we could have (defun bar (x y) (declare (optimizers bar-into-quux-maybe bar-into-noop-maybe)) ...)  Date: 31 JAN 1981 1853-EST From: DLW at MIT-AI (Daniel L.
Weinreb) Re: Optimizers Clearly the syntax for optimizers is to use local declarations. This is the general mechanism used for all such pieces of information that are to be passed to the compiler.  Date: 31 January 1981 19:07-EST From: Daniel L. Weinreb To: Griss at UTAH-20 cc: lisp-forum at MIT-MC Re: Compilation in Standard LISP Do I understand correctly that your feature involves having users write functions that are essentially part of the compiler? What would happen if someone were to completely redesign the compiler and all its data structures? We are planning to do this when we build the next generation of Lisp machine, and should have no problem with portability of code; it seems like your system would not be so portable in this circumstance.  Date: 2 Feb 1981 12:11 PST From: Deutsch at PARC-MAXC To: Griss at UTAH-20 (Martin.Griss) cc: lisp-forum at MIT-MC Re: Compilation in Standard LISP Interlisp currently has the ability for users to define "macros" that call internal parts of the compiler directly. This has turned out to be a TERRIBLE idea -- it depends on all kinds of fragile, wizardly knowledge about the compilation environment, and its value is just about nil now that the compiler does a few more things built-in. If I had my druthers I'd take this "feature" out and replace it with Maclisp/LMLisp-style source transformation.  Date: 13 January 1981 1704-EST (Tuesday) From: Guy.Steele at CMU-10A Re: Canonicalization of predicate ordering in DEL, MEM, etc. Up to now, the functions DEL, MEM, etc. have been thought of as taking an equality predicate so that one could write, for example, (MEM #'= X Y) or (DEL #'STRING-EQUAL X Y). However, the semantics could be usefully extended to allow ordering operators: (MEM #'LESSP 3 X) clearly looks for an element of X which is less than 3... or which 3 is less than. Which? I suggest that part of the definition be that the item and the list element are fed to the predicate *in that order*, to resolve the ambiguity.
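[GLS's proposed convention can be pinned down with a sketch; MEM here is a standalone illustration in modern Common Lisp, not the installed definition in any of these Lisps.]

```lisp
;; The item always goes to the predicate first, the list element second.
(defun mem (pred item list)
  (do ((l list (cdr l)))
      ((null l) nil)
    (when (funcall pred item (car l))   ; item first, element second
      (return l))))

;; (mem #'< 3 '(1 2 5 7)) tests (< 3 elt), so it finds the first
;; element that 3 is less than:
;; => (5 7)
```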
(I propose that order rather than the other order because the item and list arguments appear in that order.) Therefore (MEM #'LESSP 3 X) would look for an element that 3 is less than. Does anyone object to this becoming standard? I realize that one could write (MEM-IF #'(LAMBDA (Y) (LESSP 3 Y)) X), but if that 3 is a local variable then either MEM-IF must be open-coded or you get the funarg problem... (grr.)

Date: 13 January 1981 22:42-EST From: Daniel L. Weinreb To: Guy.Steele at CMU-10A cc: lisp-forum at MIT-MC Re: Canonicalization of predicate ordering in DEL, MEM, etc.

This seems OK to me.

Date: 14 January 1981 03:21-EST From: David A. Moon To: Guy.Steele at CMU-10A cc: LISP-FORUM at MIT-MC Re: Canonicalization of predicate ordering in DEL, MEM, etc.

This is a good idea. I will update the Lisp machine documentation to recognize it explicitly (and make sure that all such functions in fact do the arguments in that order).

Date: 13 December 1980 15:10-EST From: George J. Carrette To: RWK at MIT-MC, JONL at MIT-MC cc: LISP-FORUM at MIT-MC Re: generalization of PUSH

say that there was a GENERAL-PUSH macro, such that

(defmacro PUSH (C L) `(GENERAL-PUSH CONS ,C ,L))
;; As a matter of fact, it looks like you could trivially
;; change NILCOM;SETF's definition of +INTERNAL-PUSH-X to
;; be general in this manner.
;; then I could define a favourite macro
(defmacro INC (X) `(GENERAL-PUSH PLUS 1 ,X))
;; It would be very nice to be able to do this,
;; since otherwise I have to
;; (1) Go through all the multiple-evaluation preventing hair
;;     that you have already figured out.
;; OR
;; (2) Do it wrong, possibly being screwed by multiple evaluations
;;     at some later date.

-GJC

p.s. POP is a candidate too, just by making CAR/CDR be parameters. Together these would be nice for stack implementations using arrays & other things. I've got a hacked version of NILCOM;SETF on my dir, and it works like a charm (in the compiler).
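[Editor's note: GJC's GENERAL-PUSH can be sketched in a couple of lines if you ignore the multiple-evaluation hair he mentions. The sketch below is not from any message in this archive; it assumes the place being pushed into is a simple variable, which is exactly the case where SETF's hair is unnecessary.]

```lisp
;; Editor's sketch, not an installed macro: a toy GENERAL-PUSH
;; that applies OP to ITEM and the current value of PLACE and
;; stores the result back.  PLACE is assumed to be a plain
;; variable; a real version would go through SETF machinery to
;; avoid evaluating a general PLACE form twice.
(DEFMACRO GENERAL-PUSH (OP ITEM PLACE)
  `(SETQ ,PLACE (,OP ,ITEM ,PLACE)))

;; GJC's two examples then fall out directly:
(DEFMACRO PUSH (C L) `(GENERAL-PUSH CONS ,C ,L))
(DEFMACRO INC (X) `(GENERAL-PUSH PLUS 1 ,X))

;; (SETQ N 5) (INC N)  =>  N is now 6
;; (SETQ L NIL) (PUSH 'A L)  =>  L is now (A)
```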
Date: 24 November 1980 1441-EST (Monday) From: Guy.Steele at CMU-10A To: Moon at MIT-AI (David A. Moon) cc: lisp-forum at MIT-MC Re: Moon's proposed incompatible change to PUTPROP

Oddly enough, I have been thinking about incompatible changes to the property list functions also. I would like to suggest that disembodied property lists never be rearranged, but that property lists of symbols be arbitrarily mangleable. (For example, GET might even try the bubble-up-the-list optimization hack.) However, I realize that this disparate treatment of lists and symbols is not too pleasant. Actually, what one really wants often for lists is not GET, but MEMQ-ALTERNATE, and so on. Maybe it was a mistake ever to let GET and PUTPROP work on lists (based as it was on the old "fact" that the PLIST was the CDR of a symbol).

Date: 20 December 1980 2333-EST (Saturday) From: Guy.Steele at CMU-10A To: lisp-forum at MIT-MC, Scott.Fahlman at CMU-10A, GJS at MIT-AI, rwg at SU-AI, Fateman at Berkeley Re: Once again a plea for slash

I am firmly convinced that the MacLISP-and-sons community would be much better off if we were to change the "slash" character to something other than "/". The sooner we do it, the better.

(1) Sussman and others report that the single greatest stumbling block for novice LISP users trying to do numerical tasks is remembering to double the "/" for the division operator. This is the only operator of the four standard arithmetic operators that does not have the same name used in almost every other programming language ("+", "-", "*", "/").

(2) The "/" character is common in English text. It looks odd to have to double it in error messages, etc. Have you ever written (FORMAT T "~S got an I/O error." FOO) only to get the error message FROBBOZ got an IO error. ? On the other hand, doesn't (FORMAT T "~S got an I//O error." FOO) look awfully silly? (I observe that in a recent note KMP instead wrote "I\O" for "I/O" to avoid this difficulty.
I suspect this usage looks strange to all except users of APL\360.

(3) I want to push for rational numbers as a standard part of LISP, supported along with bignums and floating point, with appropriate contagion rules and operations. An obvious syntax for a rational number -- the *only* obvious syntax -- is /, for example, 43/17. One might argue that this syntax could be accommodated as an exception anyway, but it would be an awfully kludgy exception.

(4) At least one operating system, UNIX, uses the "/" character in pathnames. Other operating systems are likely to emulate this, given the popularity of UNIX plus the lower-caseness of "/"! For this reason, FRANZ LISP has used "\" as the quote character, so that one may use file names like "/usr/gls/win.lsp" rather than "//usr//gls//win.lsp".

For all these reasons, I propose that we switch -- and soon -- to "\" as the standard character quoter. With the advent of " strings and | symbols, I think you may be surprised at how few / characters actually appear in code nowadays. This will have the following repercussions:

(a) The name of the division function will be "/". (Actually, I further propose that "/" be the universal *correct* division operator, which floats (or uses rationals) if necessary to give a correct result. For example, (/ 7 2) => 3.5. Then let "//" be the one which produces integer results for integer inputs (or maybe always produces an integer result??). The name of remainder would change to "\\", and why don't we just flush integer GCD, as it is seldom used?

(b) The #/ syntax would become #\. Following KMP's recent suggestion, #\ could take on the burden of both #/ and #\.

(c) In other situations, / would become \ (for quoting characters).

So how about it, people? Maybe it's not worth changing MacLISP, but I predict it would be a well worthwhile investment for LISPM, NIL, and SPICE LISP.

Date: 23 December 1980 20:10-EST From: Daniel L.
Weinreb Sender: dlw at CADR6 at MIT-AI To: GJS at MIT-AI, rwg at SU-AI, Guy.Steele at CMU-10A, Scott.Fahlman at CMU-10A, Fateman at BERKELEY, lisp-forum at MIT-MC Re: Once again a plea for slash

I, too, assign a high benefit to changing the quoting character to not be slash. The problem of blocking novice users bothers me a lot; having to double slashes in text bothers me a lot too; leaving room for rationals is a good thing, though, as we just pointed out to one of our users, we will probably never get to the point where implementing rational numbers is the most important thing to be done next; Unix pathnames could certainly be an issue for some sites in the future.

However, there are some problems in your plan. (1) Leaving Maclisp alone and changing the LM and NIL would be a disaster for Macsyma. I suppose Macsyma could doctor the Maclisp readtable before doing anything else, assuming there is some way to make this work in the compiler. (2) Making the / function do what you say is very unappealing. I think you are throwing around the word "correct" rather loosely, especially for someone who has, in the past, been careful to distinguish between "floating point numbers" and "real numbers". What is the "correct" result of (/ 1 3)? I would prefer to let // retain its old meaning and have / be the round-towards-negative-infinity kind of integer division that Bill Gosper has more than adequately demonstrated is superior to the current thing ("FORTRAN divide").

The implementation strategy for the Lisp Machine would be as follows: To keep things working as much and as long as possible, several techniques can be used. Leaving "//" as a function named "//" is a good example of this. Probably all occurrences of the function named "/", currently called "//", can be left as calls to the new function named "//", and they will never have to be changed if we don't want to. Other than this case, we rarely use slashes in symbol names. More often, we use them in strings.
Where they are used to slashify slashes, it would not hurt terribly to leave things alone; we would just get funny messages with doubled slashes every so often, which could be fixed any time. The problem is where we use them to slashify double-quotes in strings; these would have to be fixed. There are also a few cases where backslashes are used explicitly in strings, like user-defined format items, though these are pretty rare. I think the easiest thing would be to generate a transition system version in which both slash and backslash worked as quoting characters, and then use this to convert everything, finally removing slash when everything is converted. No matter how we do it, old Maclisp programs that are not converted will not work after the change unless they are read in on a special readtable. We should make such reading easy to do; I think this is a price worth paying to get rid of making slash quote. The real problem, of course, is that this conversion is a pain and a bunch of work. Someone has to do it to all the software. Of course, we have some pretty powerful versions of Tags Query Replace on the LM these days...

Date: 23 Dec 1980 21:55:37-PST From: CSVAX.fateman at Berkeley To: GJS at MIT-AI, HIC at MIT-MC, LISP-FORUM at MIT-MC, dlw at MIT-AI, rwg at SU-AI cc: fahlman at cmu-10a, guy.steele at cmu-10a Re: Once again a plea for slash

some terminals don't have a back-slash. Does the multics typeball have it? (Of course I prefer back-slash for quoting a character as we have done in Franz.) What "/" should do is another matter. Adding exact rational numbers as a data type is not necessarily a good idea. unbounded precision is expensive if there is a long sequence of operations.

Date: 24 December 1980 01:16-EST From: Robert W. Kerns To: CSVAX.fateman at BERKELEY cc: GJS at MIT-MC, HIC at MIT-MC, LISP-FORUM at MIT-MC, DLW at MIT-MC, RWG at MIT-MC Re: Misc

There is no such thing as a 'multics typeball'.
Multics uses \ extensively as its printed representation of non-printing characters. Maybe EBCDIC terminals don't have it, but then you have to map characters anyway. As for the partial-ascii TTY's; the few that remain should be ignored. As to your objections to providing rationals because it can be too expensive in long operations, well, that's silly. We have BIGNUMs, don't we? Obviously, that's why we use a different operator; so we use rationals if that's what we want, or we write what we write now, and DON'T use them if that's what we want. Why prohibit a feature on the grounds that some people may find it too expensive? As long as I'm sending a message: I too would like to see '/' replaced.

Date: 24 Dec 1980 09:03:46-PST From: CSVAX.fateman at Berkeley To: RWK at MIT-MC cc: DLW at MIT-MC, GJS at MIT-MC, HIC at MIT-MC, LISP-FORUM at MIT-MC, rwg at mit-mc Re: rationals as a Lisp data type

I do not object to a data type which is expensive and useful. I suspect however, that rationals will be found to be less useful than, for example, bigfloats or what Collins calls "binary rationals" (power of 2 denominators), or perhaps IEEE standard floating point.

Date: 24 December 1980 15:25-EST From: Daniel L. Weinreb Sender: dlw at CADR6 at MIT-AI To: GJS at MIT-AI, rwg at SU-AI, CSVAX.fateman at BERKELEY, HIC at MIT-MC, LISP-FORUM at MIT-MC cc: fahlman at CMU-10A, guy.steele at CMU-10A Re: Slash: confusing the issue

(1) All modern ASCII terminals have backslashes. Multics has changed since you left; it now uses modern ASCII terminals. The 2741s and such are all gone. Multics uses backslashes regularly. (2) It doesn't matter whether we do or don't implement rational numbers, or how we do it. This is a red herring. We can decide it later. The important thing is that we are gobbling a perfectly good character, slash, that better things can be done with.
Because of issues such as you have raised, the Lisp Machine has not yet implemented rationals, and we certainly aren't going to rush out and put them in just because a new character got freed up in the readtable. (3) Howard, it should not be necessary to have a switch to revert / to run old code, since all old code calls "//", which will retain its old meaning. (I guess you DO have to recompile things in the new readtable.)

Date: 29 December 1980 17:17-EST From: George J. Carrette To: CSVAX.fateman at BERKELEY cc: DLW at MIT-MC, GJS at MIT-MC, HIC at MIT-MC, LISP-FORUM at MIT-MC, RWG at MIT-MC, RWK at MIT-MC Re: rationals as a Lisp data type

    Date: 24 Dec 1980 09:03:46-PST From: CSVAX.fateman at Berkeley
    I do not object to a data type which is expensive and useful. I suspect
    however, that rationals will be found to be less useful than, for example,
    bigfloats or what Collins calls "binary rationals" (power of 2
    denominators), or perhaps IEEE standard floating point.

I would put in a strong objection to so-called bigfloats. Most of the time I've seen them used by macsyma users to substitute numerical brute force for mathematical thoughtfulness. The better understood rational data type makes the classic numerical pitfalls obvious. Regardless, it's clearly more fruitful to have methods in the lisp language (make that "lisp system") to add data types which are "on par" (in "efficiency" and error checking provided) with the built-in data types, than it is to add more basic types. Maybe it isn't clear, but the different ways of doing it are obviously a more interesting subject to argue on.

Date: 30 December 1980 1112-EST (Tuesday) From: Guy.Steele at CMU-10A Re: Quotation concerning rational arithmetic

I stumbled across this familiar article again quite coincidentally, looking for something else: "Computer Symbolic Math and Education: A Radical Proposal" by David R. Stoutemeyer (U. Hawaii). Excerpts follow. All emphases (*'s indicate italics) are Stoutemeyer's.
"Now for the most damnatory indictments of commonly-taught languages: 10. *The numbers and their arithmetic do not correspond to those generally taught in schools!* 11. *The numbers and their arithmetic do not correspond to those used in everyday life!* "The limited-precision integer arithmetic of these languages is bad enough in these regards, even without its usual overflow asymmetry induced by 2's complement arithmetic. For the floating-point arithmetic of these languages, we can add the indictment: 12. *Few other than the very best numerical analysts fully understand the implications!* "In contrast, the arithmetic that students learn in elementary school is: 1. indefinite-precision rational arithmetic. 2. rounded and exact indefinite-precision decimal-fraction arithmetic. "True, in high-school chemistry or physics students may learn scientific notation, which could be regarded as indefinite-magnitude, arbitrary-but-fixed-precision, rounded-decimal arithmetic. In contrast, the floating-point arithmetic of commonly-taught languages is finite magnitude, with only 1 to 3 alternative precisions, usually chopped nondecimal. All of these internal differences fom true scientific notation have external manifestations which are baffling to most people. ... "Admittedly, extended sequences of indefinite-precision arithmetic operations on experimental data suffers an unjustifiable growth in digits, but to that one can respond: 1. Render onto floating-point arithmetic that which one must. 2. Render onto more rational arithmetic all that one can. "Perhaps if floating-point computation becomes a choice rather than an imposition, users will regard floating-point with more of the caution it deserves. "Perhaps indefinite-precision rational and decimal-fraction arithmetic are inevitably less efficient than their respective finite-precision floating-point and integer counterparts. However: 1. 
The difference in efficiency would greatly decrease if the indefinite-precision arithmetics were microcoded or hardwired as are their finite-precision counterparts in most computers. 2. ... [remark on "floating-slash arithmetic"] 3. Even if the indefinite-precision arithmetic is substantially slower, computing has become so inexpensive that for the computational needs of most people, *the cost of indefinite- precision computation is negligible compared to the labor of assessing results done in an unnatural arithmetic*. "How negligible do computing costs have to become before software and hardware designers abandon this historical obsession with efficiency? [It'll never happen! But anyway... --GLS] If a certain computation costs 10 times as much in rational arithmetic as in floating-point, and the latter method was deemed worthwhile a few years ago when computer costs were more than 10 times as much, is it not worthwhile now to do the computation in a more humane arithmetic? "In the early days of computers, scientific computation was usually done using binary fixed-point fractions having magnitudes restricted to lie between 0 and 1. The wide-spread acceptance of floating-point brought substantially greater convenience for a large loss in efficiency. For most work, computer costs have now decreased enough to justify another such step in favor of human understanding. Those who cling to efficiency-worship should defend fixed-point fractions rather than floating-point." So how about it, fixed-point fans??  Date: 25 DEC 1980 1716-EST From: GJS at MIT-AI (Gerald Jay Sussman) Subject: Once again a plea for slash To: Guy.Steele at CMU-10A I certainly agree with you about slash. I also like rational arithmetic.  
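[Editor's note: the exact "/" that Steele proposes above, and that the Smalltalk-80 message below adopts, can be sketched in a few lines. The sketch is not from any message in this archive; RAT/ and RAT-GCD are made-up names, the rational is represented as a (NUMERATOR . DENOMINATOR) cons in lowest terms, and positive integers are assumed throughout.]

```lisp
;; Editor's sketch of exact rational division: reduce by the GCD
;; and collapse to a plain integer when the division is exact.
;; Assumes positive integer arguments; sign handling and contagion
;; with floats are omitted.
(DEFUN RAT-GCD (A B)                       ; Euclid's algorithm
  (IF (ZEROP B) A (RAT-GCD B (REMAINDER A B))))

(DEFUN RAT/ (A B)
  (LET ((G (RAT-GCD A B)))
    (IF (= (QUOTIENT B G) 1)
        (QUOTIENT A G)                     ; exact: plain integer
        (CONS (QUOTIENT A G) (QUOTIENT B G)))))  ; lowest terms

;; (RAT/ 7 2)  =>  (7 . 2)   i.e. the rational 7/2
;; (RAT/ 6 3)  =>  2         information-losing only in notation
```

The point of the representation is the one Deutsch's message makes for Smalltalk-80: dividing two exact numbers never loses information, so the result is a fraction exactly when the quotient isn't integral.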
Date: 31 Dec 1980 11:52 PST From: Deutsch at PARC-MAXC To: LRG^ at PARC-MAXC, Goldstein at PARC-MAXC, Bobrow at PARC-MAXC cc: LISP-FORUM at MIT-MC Re: Division in Smalltalk

Well, it looks like we've finally come to a decision about the division and remainder (modulus) operators in Smalltalk-80, so here it is for everyone to start getting used to it.

/ means "correct division". If you divide two integers and the remainder wouldn't be zero, / returns a Fraction (rational representation in lowest terms). If you divide two exact numbers (integers or Fractions), the result loses no information: it is a Fraction if (and only if) the quotient isn't integral. If you do a division involving an inexact number (Float), the result is always a Float.

// means "division with truncation towards minus infinity". The result is always an integer, regardless of what you started with. a // b is equivalent to (a / b) floor.

\\ means "remainder with truncation towards minus infinity". a \\ b is equivalent to a - ((a // b) * b). This operation is sometimes called "modulus": it is, for example, an appropriate operation for reducing hash values modulo a table size.

quo: means "division with truncation towards zero". The result is always an integer. a quo: b is equivalent to (a / b) truncate.

rem: means "remainder with truncation towards zero", i.e. a rem: b is equivalent to a - ((a quo: b) * b).

This arrangement was worked out in a long and often frustrating process that involved input from a lot of people. Steve Putz is busily implementing it at this very moment.

Date: 7 January 1981 22:00-EST From: George J. Carrette To: Guy.Steele at CMU-10A cc: LISP-FORUM at MIT-MC Re: Quotation concerning rational arithmetic

Stoutemeyer left out two very important classes of arithmetic which were taught in elementary school (I go back to the old days when the new-math was the "in thing", would have been great if the teachers understood it of course), these are: (1) integer arithmetic modulo N.
(2) Boolean arithmetic, including those "sets" and "Venn diagrams" with apples and peaches. (Finite sets of course). With "clock arithmetic" as they called it, you can have a nice consistency in the language specification along with a naturalness and closeness to "hardware".

Date: 8 January 1981 1131-EST (Thursday) From: Guy.Steele at CMU-10A To: George J. Carrette cc: lisp-forum at MIT-MC Re: Quotation concerning rational arithmetic

In reply to GJC's message of 7 Jan: True, Stoutemeyer did not mention those (modular and Boolean arithmetic); but on the other hand, the New Math has been for the most part a resounding failure, I suspect because it was poorly organized and because it really is more useful to most people just to be able to add than to understand why it works. New Math tried to produce a generation of mathematicians instead of (human) computers; the trouble is, the world needs computers. Imagine the disaster if we had thousands of hydraulic physicists and no plumbers. End of tirade about New Math. (Feel free to disagree. But I feel lucky to have barely missed it. My brother got New Math, and it was awful.)

As for accommodating these arithmetics in LISP: (2) Boolean arithmetic is provided by the LOGxxx functions, on either single bits or on sets of unbounded size (one may use signed integers as sets over a countably infinite universe provided that either the set is finite or its complement is finite). (1) Modular arithmetic would be nice -- some algorithms can be more clever if it is known that the result need only be mod some number. *But* it is nice to be able to pick your own modulus, rather than having some single arbitrary power-of-two imposed! --Guy

Date: 8 January 1981 11:49-EST From: George J. Carrette To: guy.steele at CMU-10A cc: LISP-FORUM at MIT-MC Re: modular arith.

Certainly having a fixed modulus imposed by the hardware is pretty poor. If the modulus is not a prime it doesn't even help much with the "/" problem.
So, what one would want is a Type of number, which was modulo P. You can do a lot of calculations with those numbers, and for small modulus they needn't even be consed. I would even go so far as to suggest the obvious, encoding the characteristic of the field with the fixnum. It wouldn't take much space, could be made to cost only a bit for everyday fixnums (characteristic 0), and would be quite nice for doing algebra of all kinds. -gjc

Date: 27 Feb 1981 (Friday) 1833-EDT From: SHRAGE at WHARTON-10 ( Jeffrey Shrager) Re: The utility of readmacros in converting infix expressions.

I was hacking Franz yesterday and discovered several interesting things about LISP in general. Firstly, the purpose that I hoped to achieve was to define some major subset of the APL syntax in LISP. The reason for such a game is neither here nor there. I found an interesting application, albeit somewhat special to the task at hand, for the upper/lower distinction that is intrinsic in Franz (care of Unix). The Concept (and most key-paired APL terminals) have the special APL functions in the shift position of the normal characters. For example, cap-I yields IOTA, cap-R yields RHO, etc. I simply defined readmacros that transformed cap-I into (iota ...) and similar macros for most cap letters. Thus it was very simple to create almost all the simple (non-overstrike) APL characters without interfering at all with normal variable names. In fact, the result of this was that I was able to basically type APL expressions directly to Franz and have them Evaled as if it were APL by simply setting my terminal into APL mode.

A problem that I encountered is that there should really be better read control in NORMAL LISP (ie, not someone's hacked version). It is quite a problem to try and macro-hook infix operations because the leftpart of the dyadic function has been eaten by the reader by the time the readmacro is invoked. Typically this is resolved by simply writing your own reader and parser.
This would not be necessary if the macro could get hold of the reader pointers and address the buffer (justread and to-be-read) like an array. -- Jeff

Date: 27 February 1981 21:17-EST From: George J. Carrette To: SHRAGE at WHARTON-10 cc: LISP-FORUM at MIT-MC Re: The utility of readmacros in converting infix expressions.

On the lisp machine READMACROs get passed the list being constructed, so it is possible to hack simple infix syntax via readmacros. -gjc

Date: 2 March 1981 11:58 est From: HGBaker.Symbolics at MIT-Multics Re: shrage's APL parsing

problems could be handled quite easily with Vaughan's CGOL parser.

Date: 15 Mar 1981 (Sunday) 1605-EDT From: SHRAGE at WHARTON-10 ( Jeffrey Shrager) Re: LISP security query

Is anyone doing work on securing LISP packages from hackery? That is, suppose I wanted to write a function (system of functions) that would perform some file access or another and did not want the password to the file accessible. I could perform various encryptions of the password so that it was not available by reading my functions but I would prefer to be able to simply write a function that could not be opened for editing (grinding, etc) and that had some way of verifying that the functions that it called had not been tampered with. The latter is an obviously necessary requirement: the hacker could simply replace OPEN with a function that printed out the password. This is a fully non-trivial problem, especially for LISP since it is so easy to "fix" any function in the environment. -- Jeff

Date: 15 March 1981 1629-est From: Barry Margolin To: SHRAGE at Wharton-10 cc: lisp-forum at MIT-MC Re: LISP security query

I suppose one way to keep the insides of a program fairly secret is to compile the function. Unfortunately, I don't know of ANY language which handles security IN the language. Usually, security is handled in the operating system, i.e. the Multics system has a very robust system of Access Control Lists, which specify who can access a file.
In such a system, the language has very little to do with the security. As for your problem about passwords, the whole idea of passwords is that the person running the program should have to type the password in. If you put the password in a program, in ANY language, you are compromising it and the security it tries to maintain, since anybody who uses your program has access to the file. In addition, in any language, someone can look at the source and figure out what the password is. I don't think this is a LISP problem. barmar

Date: 15 March 1981 17:06-EST From: George J. Carrette To: SHRAGE at WHARTON-10 cc: LISP-FORUM at MIT-MC Re: LISP security query

LISP could be fine for a secure system. For example, I can just imagine on the lisp machine a situation where if I typed the symbol SI:OPEN, the reader would say, "Sorry bud, you don't have the security clearance to look at the symbol." Imagine having to give a password to PACKAGE-GOTO. Restricting READER is only one way to get security. It could be quite elegant to use Lexical Closures to make secure functions. If you don't want a LOSER/HACKER to redefine stuff in your environment on you, well, you simply close over the stuff in the environment which is sacred. -gjc

Date: 15 March 1981 20:15-EST From: David A. Moon Re: lisp security

Will you flamers PLEASE stop wasting everyone else's time?

Date: 16 Mar 1981 09:42 PST From: Deutsch at PARC-MAXC To: MOON at MIT-MC (David A. Moon) cc: LISP-FORUM at MIT-MC Re: lisp security

Amen.
We considered the Lisp security question at some length here at PARC (in trying to do an airtight "virtual machine" implementation) and concluded that yes, it was technically feasible to make a Lisp system as secure as the operating system it was embedded in, and no, it wasn't worth doing, because the cost in inconvenience in an environment where the whole point was experimentation far outweighed the degree to which we wanted to protect a hacker from bringing his personal machine down in flames. If we are talking about a world in which the operating system itself is distributed among the various individual computers, the considerations become quite different. But we aren't.

Date: 11 OCT 1980 1626-EDT From: BAK at MIT-AI (William A. Kornfeld) Re: A (possibly interesting) macro

Occasionally I find myself trying to read nested functions in the "wrong direction." For example, a form such as: (BAR (PLUS (FOO (REVERSE X)) 1)) I might understand as "first reverse x, apply FOO to that list, add 1 to the result and call BAR on that." This is precisely backwards. For cases where this kind of thinking seems more natural, the SEQ macro allows you to write: (SEQ (REVERSE X) (FOO *) (PLUS * 1) (BAR *)) which expands into the above code. It's hard to characterize the places where this kind of SEQuential, non-functional thinking is appropriate, but there definitely seem to be some. Note that any middle ground between totally SEQuential and totally functional is possible. We could have described the above as: (SEQ (FOO (REVERSE X)) (BAR (PLUS * 1))) or even as: (SEQ (BAR (PLUS (FOO (REVERSE X)) 1))) [!]

A definition of SEQ is:

(defmacro seq (&rest forms &aux replacement)
  (mapc #'(lambda (form)
            (setq replacement
                  (cons (car form)
                        (subst replacement '* (cdr form)))))
        forms)
  replacement)

Date: 11 October 1980 18:10-EDT From: George J. Carrette To: BAK at MIT-AI cc: LISP-FORUM at MIT-AI Re: A (possibly interesting) macro

    Date: 11 OCT 1980 1626-EDT From: BAK at MIT-AI (William A.
    Kornfeld)
    A definition of SEQ is:
    (defmacro seq (&rest forms &aux replacement)
      (mapc #'(lambda (form)
                (setq replacement
                      (cons (car form)
                            (subst replacement '* (cdr form)))))
            forms)
      replacement)

[1] SUBST doesn't know anything about the semantics of lisp. So you should say (meta-eval form '(*) (list replacement)). [One such meta-evaluator is MC:LIBMAX;META >]

[2] (SEQ (FOO) (BAR * *)) causes multiple evaluation of (FOO). It's not really clear that that's what is intended, because SEQ looks like it might be implemented as

(defmacro SEQ (&REST FORMS)
  `(prog (*)
         ,@(mapcar #'(lambda (u) `(setq * ,u)) forms)
         (return *)))

Which I think is much more natural for the type of programming style. I think "*" in SEQ should act just like a variable, especially since it looks so much like the "*" in the read-eval-print-loop.

Date: 11 October 1980 18:24-EDT From: Kent M. Pitman To: GJC at MIT-MC cc: LISP-FORUM at MIT-MC Re: SEQ

I agree completely with GJC's observations about SUBST. Some sort of meta-evaluation or a strict variable-like interpretation seem to me the only ways to implement it without producing rashes of unexpected effects. May I suggest rather than PROG, though, that LET* is ideally suited for this? eg, (SEQ (REVERSE X) (FOO *) (PLUS * 1)) => (LET* ((* (REVERSE X)) (* (FOO *)) (* (PLUS * 1))) *) Isn't that much prettier? -kmp

Date: 11 October 1980 18:36-EDT From: Kent M. Pitman To: BAK at MIT-MC, LISP-FORUM at MIT-MC Re: Why not SUBST

eg, consider what (SEQ (FOO) (SEQ (BAR) (BAZ *)) (GUNK *)) would do! -kmp

Date: 14 October 1980 18:20-EDT From: George J. Carrette To: BUG-LISP at MIT-MC cc: LISP-FORUM at MIT-MC

(setq Y '(1 2 3))
(setf (setq x (CAR y)) 100) ; Obscure format - SETF

I don't want to give anyone the idea that I sit around thinking up these weird things, this was created by a macro. (setf (progn (hprint "Howdy hacker") x) 99) seems to work. So I would think the same value-propagation code should work for SETQ. (****) (****) (Sort-of-value).
What do people think about SETF through COND, DO, etc. Obviously these have to be dealt with if an RMS-style destructuring binder is going to be all it could be. -gjc

Date: 14 October 1980 22:27-EDT From: Alan Bawden Re: (setf (setq x (car y)) z)

The reason I wanted (setf (progn x) y) to work is not because of any desire for generality, but because any macro that wants to return a symbol and calls the function displace actually returns (progn ), so it is sometimes necessary for setf to understand a singleton progn. The extension to longer progns wasn't part of what I was after when I suggested it. Gee whiz, friends! How about: (setf (if (random-test x) (car x) (cdr x)) (compute)) And imagine the hair in figuring out: (setf (prog1 (caar x) (rplaca x new-car)) foo) And then there is: (setf (or x y) z) and (setf (and x y) z) Heck, what about: (setf (setf ... Imagine what (setf (do ...) x) has to go through to come up with the correct code! Here is an almost endless hole that we can sink our efforts into!! I can't imagine that I would ever want to explicitly write any of these, and I find it hard to believe that I would expect setf to understand any macro that expanded into a setq.

Date: 5 Feb 1981 (Thursday) 1838-EDT From: SHRAGE at WHARTON-10 (Jeffrey Shrager) Re: Language design online

It seems that there is no one really responsible for the collection and distribution of "accepted" changes (such as the recent /\ change) from this forum. I think that this is a major problem both for the integrity of the language in general and especially for LISP users not connected to ARPA -- what happens to them when they get a package from MIT with backslashes instead of slashes?! Shouldn't there be a LISP standards group (I hesitate to use the term "committee") with some reasonably sane filing system in order to hash thru all the (perfectly reasonable) flaming that goes on here? Additionally, there are more lisp users off ARPA than on it. Shouldn't they have some input?
Perhaps this could be facilitated somewhat by the organization of a SIGLISP (since SIGPLAN has become SIGADA and/or SIGXconsideredY recently). I think that online flaming is certainly of use but that someone someplace should be appointed to actually centralize and standardize the language. I flamed at length a while back about how APL has benefited from such a centralized language definition and careful talk [a lot of APL discussion is also flaming but someone, someplace looks it all over -- used to be IBM, now it's STSC, I suppose]. I won't flame about that again, but I feel that this is an important issue that is very basic to the success of LISP in the real world. There have been articles recently proclaiming LISP as the language of the future (about which I will withhold judgement). Now that the LMs are commercially available this issue is much more important... the business world would never sit still for the kind of changing around that goes on here -- note the recent reaction to the Cobol80 standard suggestion. That's going to go down! I would like to see a LISP SIG (not just all of us) begun and some type of standards organization. -- Jeff

Date: 6 FEB 1981 1438-EST
From: JL at MIT-MC (Jonathan Lettvin)
To: SHRAGE at WHARTON-10
cc: lisp-forum at MIT-AI
Re: Language design online

I fully concur with the statement that the decisions on LISP or MACLISP development should be centralized. The only reason I would dare to speak up (seeing that I am an extremely prolific, yet novice, LISP programmer) is that having very divided DEFSTRUCTs and STRINGs from implementation to implementation, as well as randomly distributed or non-existent documentation for installed features, makes it impossible for me to use good judgement in either learning or code development. As I am intending to install hairy features at distant sites I need to have ready access to known system inadequacies of PDP-10s, ITSs, Multices, et al. A for-instance:
Multics string handling is non-standard in that caseness is handled. For my purpose, the string handling packages MUST be able to separate cases. The functions of the same name (i.e., SUBSTR, INDEX, etc.) do NOT do the same thing at MC. Furthermore, MC does handle these, but to my knowledge, in no way does it make their existence public, and the documentation is hidden in some obscure file in the form of the original source. Back to the point. I agree that having development from several varying sources (i.e., DEFSTRUCT at LM and MACLISP) is terribly valuable. Just that, once one has been shown to be more popular and accepted, it should be installed in a constantly updated on-line manual, indexed, cross-referenced, and all the good things that would make MACLISP outstanding. I already think that LISP is God's gift to the world. Now let's make it available. I volunteer to help or even learn enough to direct (although I know there are far more talented people available) any effort to do so. I even offer to become a part member of MathLab or AI or whatever for that purpose. Please, though, if anyone like KMP or JONL or SOLEY or JIM or ...... can manage to fiddle up the time, they are eminently more suited than I.

Date: 7 February 1981 04:24-EST
From: Kent M. Pitman
To: SHRAGE at WHARTON-10
cc: LISP-FORUM at MIT-MC

I think that discussion of standardization of Lisp at this time would be a waste of time.

* Mistakes will always be made in language design. The worst thing you can do is to tie yourself to your mistakes by adopting a standard for something you didn't even mean to have done.

* Things may not always go the way we like. I didn't like &keywords going in, but that's life. I'm sure there are those who still mourn the introduction of hunks into Maclisp. That's the way things go. Things may go down the wrong path, but I would prefer to see that happen than to have us stand our ground thinking we have ``the way'' when we know darned well we do not.
Some time maybe 5 or 10 years from now, maybe more, when we have some perspective on the language (if it still exists), then perhaps we can look back and decide on a standard. I don't think we're at the stage where we have the wisdom to make a useful standard.

* While it is regrettable that some users find that certain Lisp changes break their code, they must remember that they always have the option of freezing an old copy of Lisp and refusing to use the newer versions. Users of Fortran 2, Fortran 3, etc. did this sort of thing. Major changes occurred in the language between these versions and code that ran in one didn't always run in the other. So Maclisp 1914 and Maclisp 1997 share this same feature. The difference is perhaps that the evolution is faster, but there is still nothing stopping anyone from simply continuing to use Maclisp 1914 until hell freezes over if he doesn't want to accept the supposed benefits of the newer releases.

* Besides, computer standards don't really accomplish that much anyway. They tend to try to be weak enough that all dialects can hope to conform. Since the dialects that try to conform are basically incompatible anyway, the standard serves little value. It may tell you when you exit the bounds of Lisp and enter that of Algol, but it won't tell you that a program written for one Lisp dialect will run in another.

Date: 7 Feb 1981 (Saturday) 2015-EDT
From: SHRAGE at WHARTON-10 (Jeffrey Shrager)
To: kmp at MIT-MC
cc: lisp-forum at MIT-MC
Re: Standardization

I think that you have the priorities confused somewhat. It is not the outside users that have to "freeze their version at a lower level"; rather, if we expect to see any of the changes that we are talking about ever leave the realm of the ARPAnet and enter the universe of LISP as a whole, then we had better have some general method of distributing knowledge throughout the AI (et al) community.
If there are 100,000 CADRs in the field (God forbid) and we suddenly decide to change the syntax of DEFUN I think that we will be laughed at -- or worse, ignored. This is not so much of a fantasy; there are thousands of LISP systems in the world -- 99% of which never see these conversations. Is it those systems that are now "out of date" or are we talking through our hats? They are not ALL going to change slash to shlas unless we make it a matter of public policy, and that won't happen just because some hacker on LISP-FORUM said it was a good idea. Having an official info-center is a very important matter. I am not about to propose, as you thought, that we lock the language at version X and be done with it -- that would be admittedly silly at this time; however, we should have some means of communicating with the LISP world as well as centrally cataloguing what we do. A journal or newsletter or whatever is not an unreasonable way of doing this, albeit a lot of work is involved. One might argue that "well, they should join the net." I think that this is more of a fantasy than the 100,000 CADRs. Also, that does not help us in the control of the dynamic standard. A thought -- why should we be tied down to the "old" methods of standardization -- a "dynamic standard" is actually feasible in LISP as in no other [common] language. What happens to user X's lisp system when he gets onto the net and finds out that the only way that he has of running anyone else's code is to read through years of LISP-FORUM mail and incorporate in his system all the changes chronologically? -- Jeff

P.S. I directly challenge your statement that "computer language standards are weak...and tell you no more than when you leave ALGOL and enter LISP." I think that the latter is one of the few things that they DO NOT tell you, and I hold the APL standard as direct evidence against the rest of that paragraph.

Date: 8 February 1981 02:13-EST
From: Kent M. Pitman
To: "(FILE [LSPMAI;LISP FORUM])" at MIT-MC
Re: [This reply went to SHRAGE@WHARTON-10]

We are research groups and the Lisp dialects we use are a research vehicle toward other goals -- they are not the goals in and of themselves. As such, they must be something which can adapt to fit our needs, not something which we have bound ourselves to and must bend our programs to match. That our Lisp dialects are distributed to the outside world is a spin-off. We don't market Lisp. If/when Lisp is marketed, the people who do it can worry about standardization. We have few enough people being paid little enough to get as much as possible done in as short a time as possible; those people need to spend that time worrying about more important things than adherence to standards. I believe your commentary is in part motivated by the scarcity of documentation on Maclisp and LispM lisp. Reasonable documentation will help significantly. We are working to produce some. I have neither the time nor the interest to do more than that. -kmp

Date: 8 February 1981 11:16-EST
From: George J. Carrette
To: SHRAGE at WHARTON-10
cc: KMP at MIT-MC, LISP-FORUM at MIT-MC
Re: Standardization

But: There is the LSB aka GSB method, which is rather unique to lisp and very interesting in itself. That is: (he can say it better of course) considering the various features, such as DEFUN, to be low-level primitives with quirks which the local system hackers seem to like. As long as a system supports arbitrary compile-time evaluation and a user-extensible compiler [name me a couple APL systems which have that], then one can do things "his own super-winning standardized way." In that sense, "conventional computer language standards" are weak, because they are informal. A lisp language standard could be a formal (i.e., written in lisp) embedding of a recommended programming language in the various lisp systems available. I would call this the strong case for lisp standardization.
It is in fact the usual way of doing things for transportable systems such as Macsyma. Much of what gets said in LISP-FORUM really has to be taken in light of this. When FOO acts like he is going to *die* if feature X isn't changed, he is probably just being over dramatic. I hope this shows that if you are interested in lisp standardization, you can really do it. -gjc  Date: 8 February 1981 21:23-EST From: Kent M. Pitman To: "(FILE [LSPMAI;LISP FORUM])" at MIT-MC Re: [SHRAGE: Standardization] Date: 8 Feb 1981 (Sunday) 1158-EDT From: SHRAGE at WHARTON-10 (Jeffrey Shrager) To: kmp Re: Standardization I can understand your position... ie, a research group, and know that you do not have the time to work on such standardization. I was not implying that you personally should do the job. I was simply replying to your most recent note. Anyhow -- yes, LISP is typically used as a research tool and, yes, documentation on the lisps that exist would be a good step. Note that being a research group you will eventually, hopefully, produce a product for whoever and if it is a good one then others might want to run it but be restricted by the particular LISP dialect that it runs under. May all your research be fruitful. -- Jeff  Date: 9 Feb 1981 09:03 PST From: Deutsch at PARC-MAXC Re: Standardization Several assertions that have been made about Lisp standardization contradict my experience with standardization and/or Lisp. Re KMP: yes, it's true things don't always go the way a particular individual wants. But balance some amount of unhappiness / inconvenience for some number of individuals against the incredible amount of work required to keep one's programs running in a "moving target" and/or do lots of niggling little edits (some of which may not be mechanically detectable, e.g. if the semantics of some existing function change in some subtle way). I've seen this in Interlisp -- hardly any program of mine survives more than a year or two without having to be edited. 
I would expect things to be at least as bad in the more volatile MIT environment. The option of "freezing a system" is hardly ever realistic. New packages get written and people want to use them. The outer environment (operating system, etc.) changes; the language system has to track them, and nobody wants to fix old versions. Bugs get discovered and fixed, but not retroactively. I've seen this phenomenon in every language system people have tried to fork off a frozen version of. Computer standards can accomplish a lot if enforced. APL is a good example. Algol is another. Cobol is yet another.

Re GJC: unfortunately, everyone doing things "his own super-winning standardized way" doesn't work either. For several years I did my Interlisp programming in a system which I had done modest extensions and adaptations to. I finally had to give it up, because none of the programs I wrote could be combined with programs written by anyone else (because they had all extended and/or adapted the system in their own ways, some of which were incompatible with mine). Arbitrary compile-time evaluation and user-extensible compilers make this problem much, much worse.

I personally favor an approach similar to that of the Utah group: standardize a modest kernel of Lisp which all systems agree to implement compatibly. This allows people to write transportable programs if they want to, and to balance the cost of transportability (both between systems, and into future versions of their own system) against whatever gains they get from using non-standardized features. As we gain more experience, the size of the standardized kernel can grow.

Date: 9 Feb 1981 (Monday) 1702-EDT
From: SHRAGE at WHARTON-10 (Jeffrey Shrager)

Okay -- so how many of y'all are game to go for the SIGLISP "modest kernel" standards committee? We can talk about methods when we see whether we're dealing with 10 or 1000 members.
Date: 9 Feb 1981 15:24 PST From: Masinter at PARC-MAXC To: Lisp-Forum at MIT-MC, "@LispDiscussion" at PARC-MAXC cc: Lisp-Discussion at MIT-MC Re: Lisp Compatibility, Lisp differences INVITATION TO JOIN LISP-DISCUSSION: In June, a group of us (namely those currently on Lisp-Discussion@MIT-MC) got together and generated an outline of what we believed to be the major areas of divergence among the popular lisp dialects of the ARPA community. I am interested in turning this outline into a publication, because I think that currently there are more differences than most folks suspect. This of course would be useful to anyone seriously considering Lisp "standardization", because every area of divergence would necessarily have to be addressed in any proposed standard. I don't know enough about all of the dialects of Lisp to finish the document myself. Since there seems to be some recent interest on Lisp-Forum@MIT-MC on standardization, I invite those interested to join Lisp-Discussion@MIT-MC. My plan is to send out questions and edit together the responses. I'll send future messages only to Lisp-Discussion@MIT-MC. -------- SOME GROUND-RULES: There are several ways to discuss differences between dialects, easiest to characterize by the kinds of questions you wish to address. For example, one might wish to (1) take an implementation of dialect A and turn it into an implementation of dialect B on the same machine. 
(2) write a "compatibility package" which will sit on top of any implementation of dialect A and turn it into dialect B (this is different from (1) because in this situation, the implementation of A is hidden)
(3) write an automatic translator of dialect A to dialect B, which will handle most cases and accurately (100%) flag any areas which might require hand-translation
(4) same as (3), but the translator need only work for "most" programs
(5) translate a single (perhaps large) program from dialect A to dialect B
(6) switch from programming in dialect A to programming in dialect B

-----------

It seems important to concentrate on "low-level" differences first, in that things like the nature of the debugger, and the kind of window system you use, while most important to those in situation (6), are in some sense "external". Thus, I'd like to leave out, at least for now, discussions of facilities which reside in the "library" of one dialect but could in theory be written in any other dialect. Also, I would like to concentrate on descriptions of differences, leaving opinions about what "should" be done to some other document.

The dialects being considered (please be sure to join in if you know of others which should be included) are:
Interlisp (Interlisp-10 and Interlisp-D)
MacLisp (implementations on 10s and Multics)
LispMachine Lisp
Standard Lisp (multiple implementations)
NIL (VAX)
Franz (VAX)
UCI-Lisp (10s)
Lisp/370 (370)
Lisp 1.6 (10s)

Major areas of divergence:
Data structures
Evaluation
Control structures
input/output
other

Example: CAR/CDR of non-lists: While Lisp systems are consistent in the interpretation of CAR and CDR as applied to objects returned by CONS, there is divergence in the interpretation of those operations when applied to other data objects. In general, the implementations say that the result is "undefined", but there are important exceptions:
Interlisp and MacLisp define CAR/CDR of NIL to be NIL. Otherwise, result is undefined.
Interlisp-D has a switch which causes CAR/CDR of other data to cause an error.
Lisp Machine Lisp invokes the message passing mechanism, which ...?
NIL distinguishes between the empty list () and the atom NIL. CAR/CDR of NIL is (an error/undefined).
other dialects ...?

Date: 16 October 1980 15:02-EDT
From: Thomas A. Boyle
To: KMP at MIT-MC
cc: LISP-FORUM at MIT-MC

The famous SINE language for writing real time editors is lisp-like and was popular with many lisp people, and now OTA has it online in case anyone is interested. It was originally implemented on the Architecture Machines under MagicSix (TM). Here is the message he sent me. Neal.Feinberg at CMU-10A is going to try to bring it up over there, maybe.

Date: 15 Oct 1980 2140-PDT
From: Ted Anderson

I have moved the files Tom Boyle has mentioned to SAIL. You can FTP them from there without an account. The files are called sine.stf, sine1.stf, sine2.stf ... sine6.stf. That last one contains a summary of the system organization and should probably be looked at first. Let me know if you have any trouble getting the files or questions about the stuff once you get them. Yours, Ted

Date: 20 December 1980 20:38-EST
From: Kent M. Pitman
Re: Consistency: #\... and (FORMAT s "~:C" c)

Someone once proposed that #\ and/or #/ should do a vanilla read if followed by an alphabetic character. This would mean you could say #/ or #/ALTMODE to mean the same thing. This makes a lot of sense, I think, and I'm wondering why we never did it. We could retain #\ for compatibility, but encourage people to change to the more uniform scheme. I was just thinking about symmetry a bit ago and it seems to me that the appropriate symmetry is the following:

(DO ((I 0. (1+ I)))
    ((= I #o200) "What a winning I\O scheme!")
  (IF (NOT (EQUAL I (READLIST (EXPLODEN (FORMAT () "#/~:C" I)))))
      (RETURN '"I\O isn't symmetric")))

Typing this into Maclisp (using #\ and an upper bound of 32.)
dies immediately with a diagnostic of #\DOWNARROW not being defined. I think it would be well if #\ and FORMAT's ~:C option agreed on a set of names to be used. Anyone now doing something like (#/ABC) and expecting this to read as (#/A BC) is losing, so I think this is fully upward-compatible with the existing implementation. This isn't really a new proposal, just new motivation (a more coherent i/o philosophy) for an old proposal. Any comments? -kmp

ps Apologies in advance to those LispM types who are offended by the 'obsolete' function EXPLODEN (is READLIST obsolete, too? how sad) in my example. That was the easiest way I could think of to describe the desired relation.

Date: 20 December 1980 2333-EST (Saturday)
From: Guy.Steele at CMU-10A
To: lisp-forum at MIT-MC, Scott.Fahlman at CMU-10A, GJS at MIT-AI, rwg at SU-AI, Fateman at Berkeley
Re: Once again a plea for slash

I am firmly convinced that the MacLISP-and-sons community would be much better off if we were to change the "slash" character to something other than "/". The sooner we do it, the better.

(1) Sussman and others report that the single greatest stumbling block for novice LISP users trying to do numerical tasks is remembering to double the "/" for the division operator. This is the only operator of the four standard arithmetic operators that does not have the same name used in almost every other programming language ("+", "-", "*", "/").

(2) The "/" character is common in English text. It looks odd to have to double it in error messages, etc. Have you ever written
(FORMAT T "~S got an I/O error." FOO)
only to get the error message
FROBBOZ got an IO error.
? On the other hand, doesn't (FORMAT T "~S got an I//O error." FOO) look awfully silly? (I observe that in a recent note KMP instead wrote "I\O" for "I/O" to avoid this difficulty. I suspect this usage looks strange to all except users of APL\360.)
(3) I want to push for rational numbers as a standard part of LISP, supported along with bignums and floating point, with appropriate contagion rules and operations. An obvious syntax for a rational number -- the *only* obvious syntax -- is numerator/denominator, for example, 43/17. One might argue that this syntax could be accommodated as an exception anyway, but it would be an awfully kludgy exception.

(4) At least one operating system, UNIX, uses the "/" character in pathnames. Other operating systems are likely to emulate this, given the popularity of UNIX plus the lower-caseness of "/"! For this reason, FRANZ LISP has used "\" as the quote character, so that one may use file names like "/usr/gls/win.lsp" rather than "//usr//gls//win.lsp".

For all these reasons, I propose that we switch -- and soon -- to "\" as the standard character quoter. With the advent of " strings and | symbols, I think you may be surprised at how few / characters actually appear in code nowadays. This will have the following repercussions:

(a) The name of the division function will be "/". (Actually, I further propose that "/" be the universal *correct* division operator, which floats (or uses rationals) if necessary to give a correct result. For example, (/ 7 2) => 3.5. Then let "//" be the one which produces integer results for integer inputs (or maybe always produces an integer result??). The name of remainder would change to "\\", and why don't we just flush integer GCD, as it is seldom used?)

(b) The #/ syntax would become #\. Following KMP's recent suggestion, #\ could take on the burden of both #/ and #\.

(c) In other situations, / would become \ (for quoting characters).

So how about it, people? Maybe it's not worth changing MacLISP, but I predict it would be a well worthwhile investment for LISPM, NIL, and SPICE LISP.

Date: 21 December 1980 14:46-EST
From: John G. Aspinall
Re: Once again a plea for slash

I agree with Guy's suggestions for slash, except for one.
Making / the general (ie type converting) operator would introduce *more* inconsistency. Let's keep plus, difference, product, and quotient as the general operators; +, -, *, / as the fixnum choices etc. After all, most numerical hackery involves being quite conscious of whether you've got fixed or floating point entities on your hands - eg zero checking. John.  Date: 22 Dec 1980 10:26 PST From: Deutsch at PARC-MAXC To: Guy.Steele at CMU-10A cc: lisp-forum at MIT-MC, Scott.Fahlman at CMU-10A, GJS at MIT-AI, rwg at SU-AI, Fateman at Berkeley Re: Once again a plea for slash 1) I agree with your sentiments about / vs. \ as the escape character. Interlisp has used % for a long time with no ill effects. Smalltalk is about to use $. 2) Hurrah for "correct division". Smalltalk is just about to bite the bullet and implement two division operations: / will produce an arithmetically correct result, an Integer if possible, a Rational (if both operands are Integers or Rationals) or Float if not. div: will produce an integer result truncated towards zero (or minus infinity, I forget which), regardless of the types of its operands.  Date: 23 December 1980 20:10-EST From: Daniel L. Weinreb Sender: dlw at CADR6 at MIT-AI To: GJS at MIT-AI, rwg at SU-AI, Guy.Steele at CMU-10A, Scott.Fahlman at CMU-10A, Fateman at BERKELEY, lisp-forum at MIT-MC Re: Once again a plea for slash I, too, assign a high benefit to changing the quoting character to not be slash. The problem of blocking novice users bothers me a lot; having to double slashes in text bothers me a lot too; leaving room for rationals is a good thing, though, as we just pointed out to one of our users, we will probably never get to the point where implementing rational numbers is the most important thing to be done next; Unix pathnames could certainly be an issue for some sites in the future. However, there are some problems in your plan. (1) Leaving Maclisp alone and changing the LM and NIL would be a disaster for Macsyma. 
I suppose Macsyma could doctor the Maclisp readtable before doing anything else, assuming there is some way to make this work in the compiler.

(2) Making the / function do what you say is very unappealing. I think you are throwing around the word "correct" rather loosely, especially for someone who has, in the past, been careful to distinguish between "floating point numbers" and "real numbers". What is the "correct" result of (/ 1 3)? I would prefer to let // retain its old meaning and have / be the round-towards-negative-infinity kind of integer division that Bill Gosper has more than adequately demonstrated is superior to the current thing ("FORTRAN divide").

The implementation strategy for the Lisp Machine would be as follows: To keep things working as much and as long as possible, several techniques can be used. Leaving "//" as a function named "//" is a good example of this. Probably all occurrences of the function named "/", currently called "//", can be left as calls to the new function named "//", and they will never have to be changed if we don't want to. Other than this case, we rarely use slashes in symbol names. More often, we use them in strings. Where they are used to slashify slashes, it would not hurt terribly to leave things alone; we would just get funny messages with doubled slashes every so often, which could be fixed any time. The problem is where we use them to slashify double-quotes in strings; these would have to be fixed. There are also a few cases where backslashes are used explicitly in strings, like user-defined format items, though these are pretty rare. I think the easiest thing would be to generate a transition system version in which both slash and backslash worked as quoting characters, and then use this to convert everything, finally removing slash when everything is converted. No matter how we do it, old Maclisp programs that are not converted will not work after the change unless they are read in on a special readtable.
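The division operators under debate in this thread can be sketched in later Common Lisp terms (an anachronism, of course -- CL postdates these messages and settled on exact "/" plus FLOOR and TRUNCATE):

```lisp
;; Steele's "correct" division vs. the old integer "//", and
;; Weinreb/Gosper floor division vs. "FORTRAN divide" (truncation).
;; The two integer divisions agree on positive operands and differ
;; when the signs differ:
(/ 7 2)          ; => 7/2, exact rational (Steele's proposed "/")
(truncate 7 2)   ; => 3, remainder 1 (the old integer "//")
(floor 7 2)      ; => 3, remainder 1 (same here)
(truncate -7 2)  ; => -3, remainder -1 (rounds toward zero, Fortran-style)
(floor -7 2)     ; => -4, remainder 1  (rounds toward negative infinity)
```

Floor division's remainder always carries the sign of the divisor, which is much of why Gosper preferred it.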
We should make such reading easy to do; I think this is a price worth paying to get rid of making slash quote. The real problem, of course, is that this conversion is a pain and a bunch of work. Someone has to do it to all the software. Of course, we have some pretty powerful versions of Tags Query Replace on the LM these days...

Date: 24 December 1980 00:36-EST
From: Howard I. Cannon
Sender: HIC at CADR6 at MIT-MC
To: LISP-FORUM at MIT-MC, dlw at MIT-AI, GJS at MIT-AI, rwg at SU-AI, Guy.Steele at CMU-10A, Scott.Fahlman at CMU-10A, Fateman at BERKELEY
Re: Once again a plea for slash

I agree that Slash should be flushed as the quoting character. / should be analogous to +, -, and *, whatever that means (like, in MacLISP, integers; in Lisp Machine, generic, etc...). However, there should, for a while at least, be a VERY EASY way of reverting to /'s old meaning in order to run old code in a pinch.

Date: 23 Dec 1980 21:55:37-PST
From: CSVAX.fateman at Berkeley
To: GJS at MIT-AI, HIC at MIT-MC, LISP-FORUM at MIT-MC, dlw at MIT-AI, rwg at SU-AI
cc: fahlman at cmu-10a, guy.steele at cmu-10a
Re: Once again a plea for slash

Some terminals don't have a back-slash. Does the multics typeball have it? (Of course I prefer back-slash for quoting a character, as we have done in Franz.) What "/" should do is another matter. Adding exact rational numbers as a data type is not necessarily a good idea. Unbounded precision is expensive if there is a long sequence of operations.

Date: 24 December 1980 01:16-EST
From: Robert W. Kerns
To: CSVAX.fateman at BERKELEY
cc: GJS at MIT-MC, HIC at MIT-MC, LISP-FORUM at MIT-MC, DLW at MIT-MC, RWG at MIT-MC
Re: Misc

There is no such thing as a 'multics typeball'. Multics uses \ extensively as its printed representation of non-printing characters. Maybe EBCDIC terminals don't have it, but then you have to map characters anyway. As for the partial-ascii TTYs, the few remaining should be ignored.
As to your objections to providing rationals because it can be too expensive in long operations, well, that's silly. We have BIGNUMs, don't we? Obviously, that's why we use a different operator; so we use rationals if that's what we want, or we write what we write now, and DON'T use them if that's what we want. Why prohibit a feature on the grounds that some people may find it too expensive? As long as I'm sending a message: I too would like to see '/' replaced.  Date: 24 December 1980 15:25-EST From: Daniel L. Weinreb Sender: dlw at CADR6 at MIT-AI To: GJS at MIT-AI, rwg at SU-AI, CSVAX.fateman at BERKELEY, HIC at MIT-MC, LISP-FORUM at MIT-MC cc: fahlman at CMU-10A, guy.steele at CMU-10A Re: Slash: confusing the issue (1) All modern ASCII terminals have backslashes. Multics has changed since you left; it now uses modern ASCII terminals. The 2741s and such are all gone. Multics uses backslashes regularly. (2) It doesn't matter whether we do or don't implement rational numbers, or how we do it. This is a red herring. We can decide it later. The important thing is that we are gobbling a perfectly good character, slash, that better things can be done with. Because of issues such as you have raised, the Lisp Machine has not yet implemented rationals, and we certainly aren't going to rush out and put them in just because a new character got freed up in the readtable. (3) Howard, it should not be necessary to have a switch to revert / to run old code, since all old code calls "//", which will retain its old meaning. (I guess you DO have to recompile things in the new readtable.)  Date: 24 December 1980 16:52-EST From: Howard I. Cannon Sender: HIC at CADR9 at MIT-MC To: dlw at MIT-AI cc: HIC at MIT-MC, LISP-FORUM at MIT-MC, GJS at MIT-AI, rwg at SU-AI, fahlman at CMU-10A, guy.steele at CMU-10A, CSVAX.fateman at BERKELEY Re: Slash: confusing the issue Date: 24 December 1980 15:25-EST From: Daniel L. 
Weinreb

    (3) Howard, it should not be necessary to have a switch to revert / to run old code, since all old code calls "//", which will retain its old meaning. (I guess you DO have to recompile things in the new readtable.)

If I have code that reads: "/"foobar /" he said", then even though the // function exists, this code won't work. In other words, what I meant was that I might have interpreted code that had lots of slashes... this suggestion was just the obvious thing: that in the first system with / turned off, there be a function to turn it on.

Date: 24 Dec 1980 (Wednesday) 2036-EDT
From: SHRAGU at WHARTON-10 (Jeffrey Shrager)
Re: An amusing sidenote to the kwote controversy:

According to lore (passed to me by Dick Wexelblat of Sperry Univac) an alternate pronunciation of "\" is "shlas".

Date: 29 December 1980 1633-EST (Monday)
From: Guy.Steele at CMU-10A
To: Daniel L. Weinreb
cc: lisp-forum at MIT-MC
Re: Once again a plea for slash

Indeed, I should apologize, and I do, for using the word "correct" too loosely. Let me instead simply say that there is currently in MacLISP and sons *no* standard division operator that even closely approximates (// x y) = z implies (* y z) = x, because if x and y are integers z can differ from the mathematically accurate result by as much as (x/y)-floor(x/y), which can be made arbitrarily close to 1. For small x and y the relative error can be very large. Worse yet, the expression (* y (// x y)) can have arbitrarily large relative error because the error can be as much as y-1. Floating-point numbers at least guarantee a reasonably small relative error (for (* y (// x y)) no more than a couple of LSB's). Instead of flaming about arcane numerical properties and algebraic identities, perhaps I should just point out that this is another of those glitches which can be a significant stumbling block for novices.
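Steele's y-1 bound is easy to make concrete (written here with modern TRUNCATE standing in for the old integer "//", which is my substitution, not the thread's):

```lisp
;; Worked instance of Steele's worst case: for x = 9, y = 10,
;; (* y (// x y)) misses x by y-1 = 9 -- a 100% relative error.
(truncate 9 10)           ; => 0
(* 10 (truncate 9 10))    ; => 0, versus the true x = 9
```

So for small x and y the integer quotient carries essentially no information, which is the "arbitrarily large relative error" above.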
Sussman defines his own division operator for students to use for *two* reasons: (1) so as not to have to double a slash; (2) to avoid behavior like 1/3 => 0.  Date: 21 Dec 1980 0808-MST From: Griss at UTAH-20 (Martin.Griss) Subject: Re: Once again a plea for slash To: Guy.Steele at CMU-10A cc: Griss at UTAH-20 In Standard LISP we use ! for the / character. The only mathematical function that gets upset is FACTORIAL, as far as we can see. MLG  Date: 30 December 1980 1222-EST (Tuesday) From: Guy.Steele at CMU-10A Re: Forwarded message and reply Date: 21 December 1980 14:46-EST From: Carl W. Hoffman Subject: Once again a plea for slash From: Guy.Steele at CMU-10A I am firmly convinced that the MacLISP-and-sons community would be much better off if we were to change the "slash" character to something other than "/". The sooner we do it, the better. I agree with everything you say in this proposal. I am even willing to convert Macsyma and help convert Multics Emacs, which would be two of the larger bodies of code affected, as well as all of my own programs. I would make one change, however. If we adopt \ as our quoting character, then I would make the "character reader" be #/. If we stick with / as the quoting character, then the character reader should be #\. My reason? I would like to define the quoting character (let's call it / for now) as follows: When a / is encountered, the following character is read and is treated as if it had the same syntax as an uppercase alphabetic. This is almost what the current definition is, with the exception of the treatment of #/. That is, I feel that #/A should be the same thing as #A. Take a look at the Lisp Machine reader. / has the definition I gave above. / is processed at a very low level. As a result, #/ is defined using a kludge. The regular expressions which recognize #q, #o, and #\ forms look something like # Q # O # \ This of course is highly schematic. 
However, the expression which recognizes #/ looks like # Do you see what I'm driving at? Slashification is done before the tokenizing FSM is run. The definition I would like to adopt would also simplify the coding of editors and other programs which parse Lisp code. Of course, the reason that #/ was used in the first place is that, while it makes it harder if you did indeed put /-handling at that low a level, it makes it much easier for very simple-minded parsers (like EMACS) to deal with the syntax. Maybe the current organization of the LISPM reader isn't appropriate to the task? The trouble is that it treats #/A as #[/A] rather than as [#/]A. The idea is that #/ is an odd kind of /, just as #' is an odd kind of '.  Date: 25 DEC 1980 1716-EST From: GJS at MIT-AI (Gerald Jay Sussman) Subject: Once again a plea for slash To: Guy.Steele at CMU-10A I certainly agree with you about slash. I also like rational arithmetic.  Date: 19 January 1981 2148-EST (Monday) From: Guy.Steele at CMU-10A Re: Proposed schedule for change from "/" to "\" So far I have heard *no* objections whatsoever to the proposal that backslash replace slash as the quoting character, and have received many messages supporting the idea. The LISP Machine people have shown great enthusiasm, and Spice LISP is definitely committed to using backslash no matter what other LISP systems do. There is already volunteer manpower to convert MACSYMA and Multics EMACS for this purpose. Therefore I propose the following schedule for conversion: (1) Announce what's going to happen by February 1. (2) On Saturday, March 14, change MacLISP and friends so that both "/" and "\" are quoting characters. Encourage people to use "|" where possible. (3) On Sunday, June 14 (Flag Day, of course), change again so that "/" is no longer a quoting character by default. (Of course, people can still change the readtable for old programs.) Questions: (a) Is this schedule all right by everyone concerned? 
(b) Does anyone think that operations (2) and (3) should be simultaneous rather than staggered? (c) Any other comments? --Guy  Date: 19 January 1981 22:21-EST From: Gail Zacharias To: Guy.Steele at CMU-10A cc: LISP-FORUM at MIT-MC Re: Proposed schedule for change from "/" to "\" Would the MacLisp function \\ become \\\\ then? Gee...  Date: 19 January 1981 22:23-EST From: George J. Carrette To: Guy.Steele at CMU-10A cc: LISP-FORUM at MIT-MC Re: Proposed schedule for change from "/" to "\" (1) The change should be done "all at once". (2) The system itself, in this case the READER, should be able to make the conversion of a file, since *what* has to be done to convert a file can be quite easily expressed in the language. (Maybe that is, it *should* be easily expressible.) (3) If each source has some associated date, which was the last "date" of "lisp" which the source was "run" in, then it would be a lot easier to keep joe-user happy when the language is changing out from under him for reasons he cannot understand. Joe-user usually thinks about himself and not the many programmers who will follow him. -gjc p.s. I've thought out how to build "2" into a lisp system if anybody's interested.  Date: 19 January 1981 22:29-EST From: Kent M. Pitman To: GZ at MIT-MC cc: LISP-FORUM at MIT-MC Re: \\\\ well, /// won't have been taken. Maybe we could move it there (heh,heh) ...  Date: 20 January 1981 01:34-EST From: Daniel L. Weinreb To: Guy.Steele at CMU-10A, lisp-forum at MIT-MC Re: Proposed schedule for change from "/" to "\" The schedule sounds good to me. Our experience with the Lisp Machine system has shown that this sort of staggered change is very helpful in making the switchover. In order to make it really work, what would happen is that during the transition period, we would change all our code so that (in addition to doing the obvious things), all slashes were quoted by backslashes. 
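DLW's interim convention can be illustrated (a schematic sketch, not from the message; A/B stands for any symbol whose print name contains a literal slash):

```lisp
;; Before the change, / quotes, so a literal slash is written doubled:
A//B    ; the symbol with print name "A/B"
;; During the transition both / and \ quote, so the slash can instead
;; be quoted by a backslash:
A\/B    ; still reads as the symbol "A/B"
;; After / stops quoting, A\/B continues to read as "A/B" -- the \
;; now quotes a character that no longer needs quoting, which is
;; harmless and can be cleaned up at leisure.
```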
This will work during the interim period, and after the change is complete, we will merely have some quoted characters that do not really need to be quoted, which is OK. (We can take them out at our leisure, or just forget about them.)  Date: 20 January 1981 0952-EST (Tuesday) From: Guy.Steele at CMU-10A To: George J. Carrette cc: lisp-forum at MIT-MC Re: Proposed schedule for change from "/" to "\" In order to convert files quickly, maybe a simple EMACS command could be consed up to convert a file from "/" to "\".  Date: 20 January 1981 10:40-EST From: George J. Carrette To: Guy.Steele at CMU-10A cc: LISP-FORUM at MIT-MC Re: Proposed schedule for change from "/" to "\" Date: 20 January 1981 0952-EST (Tuesday) From: Guy.Steele at CMU-10A In order to convert files quickly, maybe a simple EMACS command could be consed up to convert a file from "/" to "\". Certainly. KMP may already have one, otherwise I doubt he would have offered to convert macsyma. The existence of such a command is the main argument for making the "\" <=> "/" switch all at once, because it is natural to fully convert a file you are working on, instead of going through a two-stage process (there are cases, albeit strange, where having both "\" and "/" quote will make it difficult to write something in such a way that it will mean the same thing when the quoteness of "/" is removed). The main thing is that you want to convert files as you work on them, and you want to be reminded to do the conversion on an unconverted file. -gjc  Date: 22 January 1981 14:36-EST From: Kent M. Pitman To: WJL at MIT-MC cc: LISP-FORUM at MIT-MC An Emacs macro to help make the change from / to \ will probably be provided. There will be ample time and I am sure that I or others can help you translate code if/when the "\" proposal is adopted. I wish you hadn't jumped the gun and sent such a message to *ITS. 
Changes so sweeping will be sent to *ITS for approval as well, as was the case with the *CATCH/CATCH proposal a while back, and I don't think it was fair of you to imply that such a change was going to be made without talking to other people. -kmp  Date: 01/23/81 03:36:10 From: Moon at MIT-AI Re: / and \ What should the \ function be renamed to? Is \\ reasonable? And should the present rem function (friend of remove, which is a friend of delete, which is a friend of member) be renamed to something else so that name can be used for some sort of remainder? I would like to make new functions, div and mod, which are like quotient and remainder but truncate toward negative infinity rather than zero (more precisely, mod has the sign of the divisor whereas remainder has the sign of the dividend, and div is the division function that goes with it). I guess it isn't necessary to worry about fixnum-only names for these, since computer manufacturers perversely don't provide the operation anyway. (If my documentation is up-to-date, the S-1 has an instruction for mod but doesn't have one for div! However, it has so many kinds of numbers that no naming scheme can possibly accommodate them all and declarations necessarily are required.) As far as I know, everyone agrees that the present \\ (fixnum gcd) is useless and can simply be flushed. Of course, in the Lisp machine this vast plethora of arithmetic functions are all identical, so I don't care as much as some people. But it would certainly be unacceptable to make this change in some parts of the Maclisp family and not others.  Date: 23 January 1981 03:40-EST From: Kent M. Pitman Re: / to \ I experimented with Teco macros for making the change earlier today. I have the skeleton of this stuff worked out, pending refinement if/when the details are decided upon.  Date: 23 January 1981 05:26-EST From: William G. 
Dubuque Sender: BIL at MIT-MC To: Moon at MIT-AI cc: LISP-FORUM at MIT-MC Re: ``\\ (fixnum gcd) is useless and can simply be flushed'' Maybe fixnum gcd is useless to a systems programmer, but any combinatorial or algebraic code relies on \\ enormously if efficiency is at issue (and in such applications it almost always is). Such an outright dismissal of the function is uncalled for.  Date: 23 JAN 1981 1252-EST From: DICK at MIT-AI (Richard C. Waters) I don't see any advantage whatever to changing / to \. Who cares? // looks dumb, it's true, but so what. I don't personally want to spend 10 minutes changing my code on this account. There is no added functionality that I can see. Dick Waters  Date: 23 January 1981 22:04-EST From: Alan Bawden Sender: ALAN at MIT-MC Re: / => \ This is just a brief note to put ourselves on record as opposing the change from / to \. We're not really interested in flaming about it at great length, we just want to puncture the argument that goes "Well nobody seems to object...".  Date: 01/24/81 00:54:29 From: DLW at MIT-AI Re: / and \ I'd hate to rename REM. Given DELETE, DELQ, and DEL, ASSOC, ASSQ, and ASS, MEMBER, MEMQ, and MEM, and REMOVE and REMQ, it is hard to see how the missing function could not be called REM, unless either the whole REMOVE series gets a new name, or the whole funarg-accepting series is renamed with a new convention. I would not mind having a REMAINDER function.  Date: 24 January 1981 01:23-EST From: Earl A. Killian To: DLW at MIT-AI cc: LISP-FORUM at MIT-MC Re: / and \ Date: 01/24/81 00:54:29 From: DLW at MIT-AI I would not mind having a REMAINDER function. Don't you already?  Date: 24 January 1981 01:38-EST From: Earl A. Killian To: MOON at MIT-MC cc: LISP-FORUM at MIT-MC Re: / and \ Date: 01/23/81 03:36:10 From: Moon at MIT-AI What should the \ function be renamed to? Is \\ reasonable? Maybe "%" would be a better quote character, given the natural symmetry between / and \. Is this the Interlisp character? 
Oh shit, the LispM uses this all over for internal frobs, doesn't it? Of course, %% would look even more internal! I guess it isn't necessary to worry about fixnum-only names for these, since computer manufacturers perversely don't provide the operation anyway. ... As far as I know, everyone agrees that the present \\ (fixnum gcd) is useless and can simply be flushed. As to fixnum versions of things: if you want them, you might as well use Ixxx, where xxx is the generic name. E.g. IGCD. I don't see why every random arithmetic function needs a single character name (indeed there's MIN and MAX).  Date: 24 January 1981 05:43-EST From: Jon L White Re: FIXNUM-restricted versions of rational funs I'd like to say a word *for* names like IMIN, IMAX, and IGCD (or why not even MIN&, MAX& and GCD& for fixnum functions; and MIN$ and MAX$ for flonum functions?) Note: I'm not favoring the change of / to \, but merely admitting some alternate names. If there are no objections, we should add some such names to the initial MacLISP list structure, even if they only default to being the same as MIN, MAX and \. Now a word of caution about names like "Ixxx". INTERLISP uses this style to mean "Arguments are to be 'integerized' and integer arithmetic is to be used" (since INTERLISP doesn't have 'bignums', the restriction to 'integers' is the same as the restriction to 'fixnums'). The problem with this is that each argument must still be run-time interpreted (i.e., decide if it is a number, check if it is floating, and convert to fixed if so, etc etc). In short, the compiler can't fully open-code it. Partly to avoid mnemonic conflict with the INTERLISP conventions, I'd not like to see IMIN, IMAX and IGCD used. ** I strongly recommend ** that some series of names be provided merely to mean "2's complement, computer rational arithmetic". For just gazillions of applications, the substitution of these operations for integer operations is fully satisfactory. 
The same can be said for "2's complement, computer floating-point arithmetic". At one time, MacLISP tried to out-do APL, so many of these functions got 1- or 2-character names; but how about it, how about choosing the convention of suffixing a & to mean fixnum-only, and suffixing a $ to mean flonum-only? Even when there is no particular machine instruction generally provided (e.g., what machine has a GCD instruction??), this is still worthwhile as a succinct way to locally declare arithmetic modes.  Date: 24 January 1981 11:52-EST From: Jon L White Re: \ and \\ My previous note, mentioning names like IGCD and GCD&, really should have used IREMAINDER and REMAINDER& -- \ corresponds to 2's-complement REMAINDER, whereas \\ corresponds to 2's complement GCD.  Date: 24 JAN 1981 1547-EST From: HENRY at MIT-AI (Henry Lieberman) Re: / => \ I guess I have to say that I oppose the change as well. Reason: I don't see very much benefit to be gained, and there's no point in breaking old programs without a good reason.  Date: 24 January 1981 21:36-EST From: David C. Plummer Re: /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ My vote, for whatever it is worth, is to leave the status quo the way it is.  Date: 24 January 1981 22:47-EST From: Daniel L. Weinreb To: EAK at MIT-MC cc: LISP-FORUM at MIT-MC Re: / and \ Sorry about that; yes, there is already a REMAINDER function. I would not mind typing it in instead of backslash, although REMAINDER in Maclisp would be generic rather than fixnum-only.  Date: 25 January 1981 03:18-EST From: Robert W. Kerns To: LISP-FORUM at MIT-MC cc: DICK at MIT-MC, WJL at MIT-MC Re: /\ vote My vote is to make the change, if we can persuade the LISP community at large that running an EMACS macro over their code once any time during a 3 month period won't disrupt their research too much. If you've ever tried to teach the language you can appreciate the confusion slash now causes new LISP students.  Date: 25 January 1981 03:30-EST From: Earl A. 
Killian Re: /\ For the LISPM at least, it would seem to me that it'd be winning to allow the user to specify the quote character in the -*- line, just like the base is specified there when nonstandard. People could then convert individual files at any time, and then update the -*- line. Some distant day in the future the default could be changed, and then people that didn't want to be bothered would simply have to specify \ there. Unfortunately, I don't think this would help Maclisp users much.  Date: 25 Jan 1981 0515-EST From: Dave Andre To: LISP-FORUM at MIT-AI cc: DLA at MIT-EECS Re: /\ Mess. I vote for the change. "/" is just too common a character to be a quote character, and "\" really has no other use. In a winning file system, all files would have arbitrary properties, and for the transition, a new property which tells lisp readers of the file whether the format is "new" or "old" would be the answer. Currently file properties such as this are kept with the text, on the -*- line. This has the obvious disadvantage that, in order to change the file's properties, one has to change the text. However, since it is the only property facility that exists at the moment, it's probably the right thing to use. So I propose that, at some point in time, anything which uses lisp code check for a "Format: New" in the file's property list, and if it's not there, offer to translate the file before "using" it. The problem with this scheme is that not all people use the -*- stuff at the beginning of a file, and some might want to flush it after it has been written out by the translator. As I see it, the only other viable option to determine the file's format is to use the file's date, as suggested before. The problem with this is that people may write code using the old quoting mechanism after the magic date. I see the problems with my scheme as easier to cope with. -- Dave  Date: 25 January 1981 10:01-EST From: George J. 
Carrette Re: /\ For maclisp users, an unwind-protected END-OF-FILE-EVAL-QUEUE would be sufficient to set up local modes of all kinds, since Emacs doesn't actually do a lisp read on the file. #.(PROGN (PUSH '(RESET-SLASH-SYNTAX) END-OF-FILE-EVAL-QUEUE) (SET-SLASH-SYNTAX) NIL) [There is a slight problem with sub-loading and the dynamic scope of readtable et al.] By the way, isn't read-time evaluation considered *problematic*?  Date: 25 JAN 1981 1232-EST From: BAK at MIT-AI (William A. Kornfeld) Re: /\ vote I favor the change. As to the argument about old code breaking, well that happens to me often enough anyway. I occasionally change definitions of private macros and utilities whose effects percolate to old code from time to time. I agree with RWK that // is confusing to beginners as well as being a constant annoyance to seasoned hackers. We should come up with a decision procedure for deciding whether to go ahead with this change since it seems opinions are divided.  Date: 25 JAN 1981 2259-EST From: Moon at MIT-AI (David A. Moon) To: DLA at MIT-EECS cc: LISP-FORUM at MIT-AI Re: /\ Mess, upward compatibility I was assuming that if this change is made, in the Lisp machine there will be a new -*- entry, Syntax:Lisp meaning the new standard, Syntax:Oldslash meaning the old way. This would minimize the immediate need for painful file conversion. We could even defer making Syntax:Lisp the default for a while. This would affect both the Lisp reader and the editor. This Syntax: property is scheduled to be installed anyway, to ease other syntax extensions, mostly by users rather than the system. Unfortunately things aren't so easy in the case of Maclisp and Emacs.  Date: 26 January 1981 04:40-EST From: Robert W. Kerns To: GJC at MIT-MC cc: LISP-FORUM at MIT-MC Re: /\ Date: 25 January 1981 10:01-EST From: George J. 
Carrette Subject: /\ To: LISP-FORUM at MIT-MC For maclisp users, an unwind-protected END-OF-FILE-EVAL-QUEUE would be sufficient to set up local modes of all kinds, since Emacs doesn't actually do a lisp read on the file. #.(PROGN (PUSH '(RESET-SLASH-SYNTAX) END-OF-FILE-EVAL-QUEUE) (SET-SLASH-SYNTAX) NIL) [There is a slight problem with sub-loading and the dynamic scope of readtable et al.] By the way, isn't read-time evaluation considered *problematic*? Actually, the next LISP (XLISP) has two new variables: FILE-EXIT-FUNCTIONS and FILE-EXIT-FUNCTIONS-DEFAULT. Each LOAD binds FILE-EXIT-FUNCTIONS to FILE-EXIT-FUNCTIONS-DEFAULT. These are lists of functions of one argument. When the file is finished, each function is popped off the list and called on an argument of (). If the file is exited abnormally (^G or *THROW), it is called on the file array. (Mostly because this is the only useful non-null object I could think of, but it does let you report the filepos that got an error). There are a number of other things to do with this besides local modes. One of the more important is to defer setting the VERSION property of a file (which says whether it has previously been loaded by MacLisp convention) until the file has been fully loaded, thus preventing ^G'd loads from breaking things due to half-loaded files. Another common case is to perform some setup action after all forms in the file have been performed, such as flavor compilation, etc.  Date: 1 October 1980 16:40-EDT From: Jon L White To: BARMAR at MIT-MC, ALAN at MIT-MC, KMP at MIT-MC, LISP-FORUM at MIT-MC, BUG-LISP at MIT-MC, Miller at MIT-AI, Guy.Steele at CMU-10A cc: JHendler at BBNA, JShelton at BBNA, TBlocker at BBNA, Pattermann at BBNA, Kehler at BBNA Re: Use of strings in PDP10 MACLISP There has been a lot of commentary on the two notes reproduced below and I've copied it to a file MC:LSPMAI;STRING BARMAR for reference. 
Date: 28 September 1980 20:22-EDT From: Barry Margolin Subject: "strings" To: BUG-LISP at MIT-MC Why does (equal "a" "a") return NIL? Date: 29 SEP 1980 0910-EDT From: KMP at MIT-MC (Kent M. Pitman) The primary use of "..." is to get an object which prints a certain way... "..." is defined on non-Multics Maclisp to return an UNINTERNED SYMBOL which has been SETQ'd to itself (so that it self-evaluates). Sad to say, no one seems to have thought of telling BARMAR to use the MacLISP STRING package. The initial setting of the " macro in PDP10 MacLISP is indeed for a minimal kind of compatibility, but for a real STRING implementation, one should load in the out-of-core package (LOAD '((LISP) STRING FASL)) Some minimal documentation can be found in the source files MC:NILCOM;STRING >, MC:LSPSRC;EXTEND >, and MC:LSPSRC;EXTMAC >. Similar files exist on the MacLISP distribution for TOPS-10/20 sites.  Date: 2 October 1980 00:25-EDT From: Barry Margolin To: BUG-LISP at MIT-MC, LISP-FORUM at MIT-MC Re: MacLisp strings Thanks for all the replies. I did not mean to start a controversy, but I'm sort of glad I did, because of the incompatibilities between the various dialects of MacLisp. Also, just for your information, what started this flamage was an EECS MacLisp user who called to ask why it didn't work. I figured out that "samepnamep" was the right function, but I was just wondering why that was necessary. I had also gotten into the habit of thinking that "equal" was the general function to use if one wanted to know if two objects looked alike. I am also glad I sent this in, as I now know the true story of how ITS MacLisp groks strings. 
-Barmar  Date: 12 February 1981 2335-EST (Thursday) From: Guy.Steele at CMU-10A To: bug-lispm at MIT-AI, nil at MIT-MC cc: lisp-forum at MIT-MC Re: Getting act together on strings I observe that as a general rule string functions specifying substrings use a start position and end position on the LISP Machine, but in NIL they take a start position and a count. Inasmuch as all the function names got chosen differently too that's not so bad, I guess... I think LM has STRING-EQUAL and NIL has STRING-EQUALP, LM has SUBSTRING and NIL has STRING-SUBSEQ, etc. What a confusing mess! If that's not enough, the function STRING does different things on the two machines (though I think they could be merged compatibly). Maybe it's too much to ask to make the LISPs completely compatible, but let's start on one small area at a time. How about some dialogue on what to do about the current situation? In particular, I'd like to hear some rationale for why each group did the string functions the way they did; for example, do they stem from architectural considerations? --Guy  Date: 12 February 1981 2351-EST (Thursday) From: Guy.Steele at CMU-10A Re: Slightly unwarranted flame My apologies; I was working from slightly outdated documents. Evidently NIL has changed its definition of STRING to be compatible with the LISP Machine's. The complaints about count versus end-position still hold, I think. --Guy  Date: 13 February 1981 00:57-EST From: Daniel L. Weinreb To: LISP-FORUM at MIT-AI, Guy.Steele at CMU-10A Re: Getting act together on strings (To prevent duplication, I'm just sending this to LISP-FORUM.) The reason for the Lisp Machine's string functions using an END instead of a COUNT has nothing to do with architectural considerations. It's simply been my experience that I very nearly always happen to have a variable whose value is the last character of something, rather than a variable whose value is the number of characters following some other point. 
When I was using PL/I, which uses COUNTS, I very nearly always had to compute the count by subtraction. I think this experience reflects the increasing use of "free-format fields" rather than "fixed-format fields" in our software design; we use delimiter characters and tokens rather than putting things in columns 45 through 49.  Date: 13 February 1981 20:50-EST From: Jon L White Re: STRING subsequencing functions, etc Date: 12 February 1981 2335-EST (Thursday) From: Guy.Steele at CMU-10A I observe that as a general rule string functions specifying substrings use a start position and end position on the LISP Machine, but in NIL they take a start position and a count. . . . LM has SUBSTRING and NIL has STRING-SUBSEQ, etc. This "general rule" is a false cognate -- NIL compatibly has all the LISPM string-specific functions except NSUBSTRING (yes, I'm aware that this wasn't mentioned in MC:NIL;NEWFUN > -- that document is a bit out-of-date, and for the past year we've been coding so much that there seems to be little interest in keeping it fully up-to-date.) In addition, there are all the newly-invented generic sequence functions, whose names don't "cross paths" with the LISPM names; it is the latter series which has the PL/I convention for substring denotation. Thus, STRING-SUBSEQ is just SUBSEQ restricted to STRINGs, and differs from SUBSTRING only in the way of specifying the substring length. I think LM has STRING-EQUAL and NIL has STRING-EQUALP, . . . I don't think NIL has STRING-EQUALP really; it does have STRING-EQUAL just as on the LISPM. For a long time, there has been a lobbying effort here (and I believe from you too, GLS) to insure that EQUAL doesn't use this "friendly function" when applied to strings -- EQUAL shouldn't ignore case. The NIL function which is almost equivalent to the case-sensitive comparison is STRING-MISMATCHQ. 
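The distinction JONL is drawing can be sketched (schematic LISPM-style calls; the EQUAL results shown are the case-sensitive behavior being lobbied for, plus the PDP10 behavior reported in the BARMAR exchange above, not freshly verified):

```lisp
(STRING-EQUAL "Foo" "FOO")  ; => T   -- the "friendly" case-ignoring compare
(EQUAL "Foo" "FOO")         ; => NIL -- EQUAL shouldn't ignore case
(EQUAL "Foo" "Foo")         ; => NIL in PDP10 MacLISP without the STRING
                            ; package loaded, since each "..." reads as a
                            ; fresh uninterned symbol
```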
If that's not enough, the function STRING does different things on the two machines (though I think they could be merged compatibly). There isn't a definition for STRING in NIL yet -- but there is TO-STRING, which does essentially the same thing; in fact there are a bunch of other functions TO-<type>, for various values of <type>, and these were discussed about a year ago under the notion of "Coercion functions". One bad feature of the LISPM definition for STRING is that it is not symmetric with the functions LIST, VECTOR, etc., which take n arguments and compose them into a list, vector, etc. respectively; by this analogy, one would think that (STRING #/A #/b 67. 'D) should yield "AbCD". NIL's decision to use a COUNT indicator for sub-sequences, rather than an end BOUNDARY, was influenced partly by other languages and partly by the format of some existing machine instructions which take a COUNT -- e.g., all the string instructions on the VAX take an operand which is a "length count", and the move-characters instruction of the 370 also does this. Thus, STRING-SEARCHQ could be open-coded for the VAX using the MATCHC instruction, but STRING-SEARCH (as defined in the LISPM) would require the BOUNDARY/COUNT patch-up at all usages. As it happens, since the LISPM definition for STRING-SEARCH used to use a case-ignoring comparison, it couldn't have been open-coded anyway.  Date: 15 February 1981 1955-EST (Sunday) From: Guy.Steele at CMU-10A To: JONL at MIT-MC (Jon L White) cc: lisp-forum at MIT-MC Re: Foot-in-mouth disease Well, I suppose that most of my remarks about NIL and strings need to be retracted in the light of JONL's recent note about the state of NIL. I can say that my remarks were made in good faith on the basis of the most recent documentation available to me; and I cannot slight NIL for its lack of up-to-date documentation inasmuch as it hasn't been released yet. 
(I do fervently hope, however, that documentation will accompany the release; and I predict that this will be very difficult to do if the documentation doesn't proceed in parallel.) In any case, I did get the results I was looking for: NIL uses a count for compatibility with other languages and with architectures, and LISPM uses start/end because counts have been observed to be more awkward and because LISPM is not constrained by the architecture of the VAX or S-1 (which also uses counts) or whatever. If DLW is right that end positions are more available than counts anyway, then the subtraction JONL fears will turn up anyway: (STRING-SUBSEQ FOO START (- END START)) rather than (SUBSTRING FOO START END) ;compilation requires a subtraction so the merit of the argument for eliminating a subtraction is unclear; however, the stylistic arguments are also unclear. Everything is unclear. Sigh. Thanks to all for bearing with me. --TGQ (The Gronked Quux)  Date: 15 February 1981 20:05-EST From: Jon L White To: GLS at MIT-MC cc: LISP-FORUM at MIT-MC Re: Fence posts? Date: 15 February 1981 1955-EST (Sunday) From: Guy.Steele at CMU-10A . . . If DLW is right that end positions are more available than counts anyway, then the subtraction JONL fears will turn up anyway: (STRING-SUBSEQ FOO START (- END START)) rather than (SUBSTRING FOO START END) ;compilation requires a subtraction . . . This thought has occurred to us, that despite one's best efforts, he's going to find that he has a Left-foot shoe when he wants a Right. However, I took DLW's remarks to be personal stylistic preference, since it's so hard to get data on just which case is more frequent/convenient. At least for some of the overlapping functions, one will have, in NIL, a choice. By the bye, when did the late "Great" become a "Gronked" Quux?  
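Setting the two conventions side by side (a schematic sketch; SUBSTRING is the LISPM start/end form and STRING-SUBSEQ the NIL start/count form discussed above, with 0-based indexing assumed):

```lisp
;; Characters 3, 4, 5 of an eight-character string:
(SUBSTRING "abcdefgh" 3 6)     ; LISPM: start 3, end 6    => "def"
(STRING-SUBSEQ "abcdefgh" 3 3) ; NIL:   start 3, count 3  => "def"
;; Holding an END variable, the count form pays a subtraction:
(STRING-SUBSEQ FOO START (- END START))
;; Holding a COUNT, the end form pays an addition instead:
(SUBSTRING FOO START (+ START COUNT))
```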
Date: 15 February 1981 2016-EST (Sunday) From: Guy.Steele at CMU-10A To: JONL at MIT-MC (Jon L White) cc: lisp-forum at MIT-MC Re: Gronked Quux I seem to have a cold this weekend, that's all, and I thought perhaps my last note was a bit incoherent as a possible consequence. --TGQ  Date: 8 December 1980 14:23-EST From: Kent M. Pitman Here's more data on an oddity originally introduced by ROD@SAIL in mail to BUG-LISP. He pointed out that in Maclisp, (setq x '(a b) y '(a b)) => (a b) (eq x y) => NIL (subst 'foo x (list y y y)) => (FOO FOO FOO) As he points out, the definition of SUBST in the Maclisp manual is: (subst x y z) substitutes x for all occurrences of y in z, and returns the modified copy of z. The original z is unchanged, as subst recursively copies all of z replacing elements eq to y as it goes. If x and y are nil, z is just copied, which is a convenient way to copy arbitrary list structure. ... subst could have been defined by: (defun subst (x y z) (cond ((eq z y) x) ;if item eq to y, replace. ((atom z) z) ;if no substructure, return arg. ((cons (subst x y (car z)) ;otherwise recurse. (subst x y (cdr z)))))) The problem is in fact much more far-reaching than he described, and there seems to be little agreement, so I thought I would present the rest of the story for comment... Here's what a sampling of the other dialects turns up... Test case: (setq new '(new list)) (setq old '(a b)) (setq test '((a b) a b)) (subst new old test) In ITS Maclisp, back at least to Maclisp 1293, (subst new old test) => ((NEW LIST) NEW LIST) (eq (car *) new) => T This contradicts documentation in the manual. In Multics Maclisp, (subst new old test) => ((a b) a b) ; all copied structure In LispMachine Lisp, (subst new old test) => ((NEW LIST) A B) (eq (car *) new) => T This contradicts documentation in the manual. Their documentation is essentially equivalent to that of Maclisp. Franz Lisp has no SUBST function listed in their manual. 
In Interlisp, according to their documentation (I have no way to test this),

(subst new old test) => ((NEW LIST) A B)
(eq (car *) new) => NIL ; excuse the Maclisp notation

For the curious, Interlisp documents its subst as follows...

subst[new;old;expr] Value is the result of substituting the S-expression new for all occurrences of the S-expression old in the S-expression expr. Substitution occurs whenever old is EQUAL to CAR of some subexpression, or when old is both atomic and not NIL and EQ to CDR of some subexpression of expr. For example:
subst[A;B;(C B (X . B))] = (C A (X . A))
subst[A;(B C);((B C) D B C)] = (A D B C), not (A D . A).
The value of SUBST is a copy of expr with the appropriate changes. Furthermore, if new is a list, it is copied at each substitution.

Date: 8 December 1980 14:35-EST
From: Kent M. Pitman
To: LISP-FORUM at MIT-MC
cc: rod at SU-AI
Re: SUBST

Please cc any discussion on subst to ROD@SAIL. I should have CC'd him on my last note; he's not on LISP-FORUM. -kmp

Date: 9 December 1980 14:48-EST
From: Daniel L. Weinreb
Sender: dlw at CADR6 at MIT-AI
To: KMP at MIT-MC, LISP-FORUM at MIT-MC

It appears that Multics Maclisp is doing the documented thing. The Lisp Machine is doing the right thing except that EQUAL is being used where EQ should be used. Amusingly, the function NSUBST (the "destructive" version of SUBST in the Lisp Machine) uses EQ! The Lisp Machine system software uses SUBST in seven places. Six of them (once in DISPLACE, twice in the microassembler, once in ZWEI, once in DEFSTRUCT, and once in the compiler) are (SUBST NIL NIL ...). The seventh is an ancient macro in the console diagnostic program, which uses a symbol as the second argument. There are also all the DEFSUBSTs, which use a symbol as one argument. Therefore, changing this will not affect any of the installed LispM software. However, it might break user software. In particular, I am worried about Macsyma.
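[Archivist's sketch, not part of the original exchange.] The divergence KMP catalogued comes down to which equality test the manual's recursive definition uses. A hedged restatement, parameterizing the manual's SUBST by the test:

```lisp
;; Sketch only: the Maclisp manual's recursive SUBST, with the
;; equality test made an explicit argument.
(DEFUN SUBST-TEST (X Y Z TEST)
  (COND ((FUNCALL TEST Z Y) X)          ;item matches Y: replace it
        ((ATOM Z) Z)                    ;no substructure: return arg
        (T (CONS (SUBST-TEST X Y (CAR Z) TEST)
                 (SUBST-TEST X Y (CDR Z) TEST)))))
```

With #'EQ this is the documented Maclisp/Multics behavior (fresh quoted structure matches nothing, so the tree is merely copied). With #'EQUAL it reproduces the ITS Maclisp result above, since the tail (A B) of the test case matches too, giving ((NEW LIST) NEW LIST). The LispM result ((NEW LIST) A B) corresponds to applying the EQUAL test at CAR positions only, much as Interlisp documents for non-atomic old.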
I would like to see the Lispm changed to use EQ as documented, but I would like to hear whether Macsyma will break, and to hear what the other LispM system people say.

Date: 9 December 1980 15:12-EST
From: Kent M. Pitman
To: DLW at MIT-MC
cc: LISP-FORUM at MIT-MC
Re: EQ(UAL) in SUBST

The general problem with using EQ is that in Maclisp, flonums and many fixnums which are '=' are not necessarily EQ. While it would be fine for the LispM, as an independent language, to use EQ, EQ is itself a very different operation on the LispM, so I'm not sure this is a good idea. The notion of EQUALness is much more general across implementations than is the notion of EQness. I suggest that perhaps we could win with a compromise, such as one that called EQUAL on atoms and EQ on lists. This would make SUBSTing for flonums, fixnums, strings, etc. behave more uniformly across dialects. I agree that checking lists for EQUALness is an unnecessary slowness.

There is also the other problem of whether (SUBST 'FOO '(A B) '((A B) A B)) should return (FOO A B) or (FOO . FOO). This, it seems to me, is a far more severe issue, since it plays a role in the existing definitions of SUBSTs and other similar mechanisms, and code could conceivably be broken in Maclisp or LispM depending on who changed and why. I would prefer it if CAR's and CDR's were treated identically, but I can understand why others might find it more comfortable to ignore subtails. Whatever it does, however, I think uniformity would be important.

Macsyma users use SUBST a lot, but we can shield them on that score since we can always write one that maintains whatever semantics we feel they need. Macsyma internal code using SUBST is trivially findable and we could adapt to any well-defined set of changes as well, so don't worry too much about that. Macsyma does have to run on Multics, too, so we'd have to be doing the adaptation anyway now that the potential bug has been pointed out. -kmp

Date: 9 December 1980 15:50-EST
From: George J.
Carrette

I just scanned over the uses of SUBST in macsyma, with an interesting result. Except for uses of SUBST as (SUBST NIL NIL X), almost all probably violate macsyma data abstraction or are prone to subtle bugs of other kinds. Many are due to a single programmer who hacked quite a few years ago.

Rational function package:
; this uses a "special representation".
; macsyma "general representation" has its own SUBST function.
(SUBST (CAR V) 'FOO X)
(SUBST 1 'FOO X)
(SUBST (SUB1 (CAR V)) 'FOO X)
(SUBST (CAR (LAST GENVAR)) (CAR MINPOLY*) MINPOLY*)
(SUBST '%I '$%I P)
;; this is a violation of macsyma data abstraction, which just happens
;; to work in most cases.

GCD package:
;; an excerpt from a PROG, notice the COMMENT, which
;; is made up of PROG-TAGS. Neat eh?
END (GETD0 PL(SETQ VALS(SUBST 1. 0. VALS))) WHAT WAS SUBST FOR?

TAYLOR package:
; again, it manipulates the "special representation".
(subst 'sp2var (cadr f) s)
(subst *i (caddr e) e))))
(setq x (subst i (sum-index x) x)) i)
Here is a jewel where a symbol is used which is the USERNAME of a macsyma user that uses the package in question:
(SUBST 'ELL VAR NEWF)

Series package:
(setq sexp (subst var 'x sexp))
;; presumably this depends on the fact that a USER cannot type
;; in a symbol X, since all user symbols get a dollar-sign
;; stuck on the front.
-gjc

Date: 9 December 1980 16:46-EST
From: Kent M. Pitman
To: LISP-FORUM at MIT-MC
cc: GJC at MIT-MC
Re: Statistics on Macsyma use of SUBST

A by-hand survey says that things break down this way:
77 substitutions for symbols
 4 substitutions of NIL for NIL (2 macros + 2 isolated occurrences)
 1 substitution for (CONS ...)
 1 substitution of 1 for 0
Many of the substitutions for symbols do break data abstraction, as GJC mentions, and of those that don't, many are just places where backquote would have been used had it been available at the time of the code's development (eg, JM's SIN has at least one such occurrence).
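[Archivist's sketch, not part of the original exchange.] KMP's backquote remark can be made concrete; the expression below is hypothetical, not drawn from the Macsyma sources:

```lisp
;; Substituting a value for a placeholder symbol in a template...
(SUBST VAL 'X '(PLUS X (TIMES X X)))
;; ...is, for freshly built structure, what a backquote template
;; expresses directly, with no tree-walk over the whole form:
`(PLUS ,VAL (TIMES ,VAL ,VAL))
```

The two are not identical in general (SUBST walks and copies the entire template, and will find the placeholder anywhere, including tails), but for the template-filling uses tallied above the backquote form says what is meant.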
This data is based on what I believe are the expectations on things. eg, (SUBST foo var x) or (SUBST var v x) were assumed to be substitutions for symbols. I suspect, really, that SUBST is too powerful for most of our applications. We have been screwed more than once by SUBST poking around in parts of expressions that we should never have allowed it into... ('we' here meaning the Mathlab group). We probably oughtn't be using it as much as we do... -kmp

Date: 10 December 1980 1134-EST (Wednesday)
From: Guy.Steele at CMU-10A
To: KMP at MIT-MC (Kent M. Pitman)
cc: lisp-forum at MIT-MC, Scott.Fahlman at CMU-10A
Re: EQ(UAL) in SUBST

I think we've needed such a "halfway" equality function for years, and it crops up now and again. So how about it, guys? I'd be inclined to call it EQL (more general than EQ, less general than EQUAL (for a funny sense of "general")). However, InterLISP has had such a function for a while and called it EQP. Whatever we call it, it should be like EQUAL on non-composite objects and like EQ on composite ones (I say "non-composite" rather than "atomic" because there is a question as to whether a vector or array is an atom for this purpose; I suggest that EQ be used on lists, vectors, arrays, etc.). Regarding SUBST: should SUBST traverse vectors and the like?

Date: 10 December 1980 21:36-EST
From: Kent M. Pitman
To: Guy.Steele at CMU-10A
cc: LISP-FORUM at MIT-MC
Re: EQP

If we copy EQP's exact semantics, we should call it that to maximize compatibility. If someone is unhappy with some aspect of its definition, EQL is an ok name (minimizes incompatibility). I like the idea of having such a thing, though. -kmp

Date: 11 December 1980 02:23-EST
From: Daniel L. Weinreb
Sender: dlw at CADR7 at MIT-AI
To: Guy.Steele at CMU-10A
cc: Scott.Fahlman at CMU-10A, lisp-forum at MIT-MC
Re: EQ(UAL) in SUBST

I agree, we should have such a function.
I would prefer the name EQP; after all, it is a predicate, and since the name has already been used for this by Interlisp, calling it EQL would be gratuitously incompatible.

Date: 11 December 1981 1433-EST (Thursday)
From: Guy.Steele at CMU-10A
To: Kent M. Pitman
cc: lisp-forum at MIT-MC, Scott.Fahlman at CMU-10A
Re: EQP

Well, I find from the 1978 InterLISP manual that EQP compares numbers of unlike type: EQP[2000;2000.3] => T (their example and syntax--sorry). I had assumed that the new predicate we want would always be () if the operands were of unlike type. Therefore I propose the new function EQL:

(DEFUN EQL (X Y)
  (OR (EQ X Y)
      (AND (EQ (TYPEP X) (TYPEP Y))
           (NUMBERP X)
           (EQUAL X Y))))

Date: 11 December 1980 14:57-EST
From: Robert W. Kerns
To: GLS at MIT-MC, LISP-FORUM at MIT-MC, Scott.Fahlman at CMU-10A
Re: EQP/EQL

Better definition:

(DEFUN EQL (X Y)
  (OR (EQ X Y)
      (AND (NUMBERP X)
           (EQUAL X Y))))

Remember, in our dialect, at least, EQUAL doesn't consider 2.0 and 2 to be equal. Does INTERLISP? If not, then their EQP isn't even a 'Smart EQ' directly between EQ and EQUAL on the spectrum.

Date: 11 December 1980 15:42-EST
From: Kent M. Pitman
To: RWK at MIT-MC
cc: LISP-FORUM at MIT-MC
Re: EQL

It seems to me that the definition should include the TYPEP check; optimizations such as omitting it as unnecessary can be done as long as they preserve correctness. The case that I wonder about is some random extend datatype, call it FAKE, which impersonates fixnums. It might want to claim EQL-ness to a fixnum. TYPEP comparisons might screw this case while EQUAL might successfully call an EQUAL method in FAKE to determine equality ...? If this is so, then perhaps that's another reason to omit the TYPEP check...

Date: 11 December 1980 16:29-EST
From: Robert W. Kerns
To: KMP at MIT-MC, LISP-FORUM at MIT-MC

I should have been a trifle more explicit in my reasons.
I think that the behaviour DOES want to be similar to how EQUAL behaves on numbers, not entirely different. That's why I think it should not do the type-check itself, but should call EQUAL. I.e. if EQUAL works on funny fake-nums (and of course, NUMBERP), then so should EQL or EQP or whatever. It's a matter of consistency, not 'optimization'.

Date: 31 October 1980 08:40-EST
From: Jon L White
To: GSB at MIT-MC, ALAN at MIT-MC
cc: LISP-FORUM at MIT-MC, jerryb at MIT-AI
Re: SXHASH

I presume that the current way to reach the union of BUG-LISP and BUG-LISPM is LISP-FORUM. If anyone got this msg who didn't want such unions, he probably should remove himself from LISP-FORUM; if anyone who wants to see BUG-LISP/LISPM msgs didn't get this note, he should be put on LISP-FORUM. Having said that, now let me review the (miserable) history of SXHASH, due to

GSB@MIT-ML 10/31/80 02:37:58 Re: sxhash
There is nothing in the "definition" of sxhash which says that it should be insensitive to order in a list. This has in fact been considered a bug in Maclisp. The constraint however is that the hash should remain constant in a given lisp implementation for "all time". . . .

Date: 31 October 1980 02:29-EST
From: Alan Bawden
. . . Is it too late to change it in MacLisp? (I'll bet the answer to that one is "yes".)

The only "constraint" that we have acknowledged (QUUX, myself, Ira Goldstein, and many others about 7 years ago) was that SXHASH should be reasonably definable by primitive-recursion, which is to say that the SXHASH of a cons cell should be some trivial combination of the SXHASH's of the car and of the cdr of that cell. In fact, contrary to ALAN's conjecture, it has never been too late to change the definition of SXHASH, and it was changed twice --- once to accommodate the "primitive-recursive" desire.
There has never been any desire to have differing implementations of MacLISP agree on the value of SXHASH, but it would be nice if an EXPR-coded version of SXHASH would maintain its programmatic characteristics regardless of which implementation was supporting it.

The factors affecting a change are that many FASL files have been assembled with sxhash's stored in them (about the only non-trivial usage of SXHASH which could even notice a change between LISP version numbers); future incorrectness in these numbers would not break anything, but only slow down the fasload process, and perhaps require extra space for storing "quotified constants". Similarly, the EXPR-HASH "feature" tries to stop a DEFUN from happening if the symbol's EXPR-HASH property (from a previously compiled version) is the same as the current sxhash of the expr code; thus this feature would be broken for old FASL files by a change to sxhash, but the loss is only one of address-space, not one of incorrect code. The solution in each case is the same: put up with slowness until you recompile old fasl files.

The first midas-coded version of SXHASH did not have the "primitive" property, but was some kind of data-walker which just bash'd and rotl'd a register. The next version was something like
(DEFUN SXHASH-PAIR (X) (+ (ROT (SXHASH (CAR X)) -1) (SXHASH (CDR X))))
which has the unfortunate symmetry noted. But the possible definition
(DEFUN SXHASH-PAIR (X) (+ (SXHASH (CAR X)) (ROT (SXHASH (CDR X)) -1)))
has the symmetry in the car direction rather than the cdr direction; so the latter would at least not actually enforce symmetry, as the former surely does, for LISP lists -- its commutativity would be less apparent. How important is it to change? The price really isn't that high (except momentarily for MACSYMA, where even one extra segment of data is critical) but then the gain doesn't seem to be that high either.

Date: 31 October 1980 10:01-EST
From: George J.
Carrette
To: JONL at MIT-MC
cc: ALAN at MIT-MC, GSB at MIT-MC, LISP-FORUM at MIT-MC, jerryb at MIT-AI
Re: SXHASH

JONL, since you mentioned Macsyma, I'd like to point out that since the HASH that one wants to use depends heavily on the EQUAL in use, Macsyma uses its own definition of HASH, and uses SXHASH only to get that HASH for SYMBOLS. Also, how fancy the hash wants to be depends on what it is used for, and the relative expense of the EQUAL algorithm for the objects in use. In other words, I don't think any good programmer who knows and cares about the various hashing methods etc., should depend on the behavior of a built-in like SXHASH anyway. (Except insofar as the SXHASH is defined to do some kind of class dispatching for hashing the sub-nodes of a node). -gjc

p.s. For writing one's own hash function it would be *nice* if
(CASEQ (TYPEP X)
  (SYMBOL ...)
  (FOOBAR ...))
turned into code which, well, I'm sure you can guess.

Date: 31 October 1980 1030-EST (Friday)
From: Guy.Steele at CMU-10A
To: JONL at MIT-MC (Jon L White)
cc: lisp-forum at MIT-MC
Re: SXHASH

I was under the impression that MACSYMA kept files on disk that contained SXHASH values, and that this was the main reason for imposing the rule that the implementation of SXHASH not change. If this is not or is no longer true, then there is no reason not to "fix" SXHASH, other than the possibly considerable initial inconveniences you noted with regard to FASL files.

Rotating the CDR instead of the CAR would help. If you assume the ROT merely divides the hash value by 2 (which it tends to do for leaves of the tree being hashed which are symbols with short print names -- a ROT by 1 would be better for this, but less good for fixnums), and if you assume the addition never overflows (these are pessimal assumptions), then two leaves are currently (resp. under proposed change) distinguished iff the paths from them to the root follow the same number of CAR (resp. CDR) chains, *regardless* of the number of CDR (resp.
CAR) chains. Now in a list, every leaf except the terminating () is one CAR chain away from the root (and some number of CDRs). It would seem better to make the "weight" of a leaf at the root depend on both the number of CARs and number of CDRs. Therefore I would propose to rotate both the CAR and the CDR, preferably by amounts that are relatively large and relatively prime. So I would suggest a function like (+ (ROT THE-CAR 11.) (ROT THE-CDR 7)) or something.

What we really want is to find three operators $, %, and #, the first two being unary and the third binary. The function would then be (# ($ THE-CAR) (% THE-CDR)). What properties should these have? Ideally, $ and % would not distribute over #, or if they did then at least $ and % should not commute ($%X shouldn't equal %$X -- of course, the ROT suggestion above doesn't satisfy this). The non-commutation property means that the CDAR and CADR of a hashed tree are distinguished (by which I mean that exchanging those two subtrees will alter the hash if the subtrees have different hashes). Perhaps someone would care to pursue a theoretical approach further.

Date: 31 October 1980 11:02-EST
From: Jon L White
Re: Poor ROTten SXHASH

CC of reply to GLS: I liked your analysis -- probably ROT-car-11/ROT-cdr-7 would keep every user happy. Maybe we should probe the community on the desirability of a change? One more thing though -- in NIL, there are vectors with n components, so how to distinguish them? Would it be something like
(DO ((I (1- (VECTOR-LENGTH V)) (1- I))
     (RESULT 0))
    ((< I 0) RESULT)
  (SETQ RESULT (+ (ROT RESULT -1) (SXHASH (VREF V I)))))
or maybe the last line could even include the index I somehow:
(+ (ROT RESULT -1) (randomify I) (SXHASH (VREF V I)))

Date: 31 OCT 1980 1349-EST
From: BAK at MIT-AI (William A. Kornfeld)
Re: Lispm SXHASH loses worse than you think.

Another problem with the Lispm SXHASH is due to its using LOGXOR. The result of this is that pairs of things (pairs of changes) are ignored.
Examples:
(SXHASH '(A)) = (SXHASH '(X A X)) = (SXHASH '(X X A X X))
(SXHASH '(PLUS FOO BAR)) = (SXHASH '(PLUS TV-FOO TV-BAR))

Date: 31 October 1980 1502-EST (Friday)
From: Guy.Steele at CMU-10A
To: JONL at MIT-MC (Jon L White)
cc: lisp-forum at MIT-MC
Re: Poor ROTten SXHASH

The hash function you suggest for vectors is probably adequate. I don't think that adding in a funny function of I helps. I went down that path some time ago when analyzing the function. If you remember that + is commutative and associative, then it becomes clear that adding in (randomify I) on every cycle is the same as adding in (SUM I 0 N (randomify I)) = (randomify1 N) at the end of the loop. (Ooops, that's not quite right -- the result gets ROT'd at each step -- but the principle is roughly the same.) So it's not really as good an extra randomization as one might think. I recommend ROT'ing by more than 1 though -- maybe by 11 or so -- so as to cause more "edge interactions" (get the former ends of the word near the middle so additive carries will muddle things up; this reduces the probability of the + really being associative).

Date: 19 October 1980 05:54-EDT
From: Jon L White
Re: Unsolicited Plaudits

On September 25, 1980, Mr. Shigeki Goto of the Electrical Communication Laboratories (Nippon Telegraph and Telephone Co., in Tokyo) sent me a note of his results when comparing the timings of two of his functions when run under ILISP, INTERLISP, and MacLISP. His machine, I believe, is a 20/50 running TOPS-20. Here is an excerpt from his note (between doublequotes):

" Mr. Nobuyasu Ohsato, one of my colleagues at Musashino ECL, has compared the execution speed of various LISP systems. The execution time in the table is shown in milliseconds.
 __________________________________________________________________
 |             |     Program     | UCILISP | INTERLISP | MACLISP |
 |-------------+-----------------+---------+-----------+---------|
 |             | TARAI-4*        |  57.0   |   26.0    |  22.8   |
 |             |-----------------+---------+-----------+---------|
 | Interpreter | numerical SORT  |  53.9   |   63.7    |  55.0   |
 |             | of (1 2... 100) |         |           |         |
 |-------------+-----------------+---------+-----------+---------|
 |             | TARAI-4*        |  2.90   |   15.0    |  0.69   |
 |             |-----------------+---------+-----------+---------|
 | Compiler    | numerical SORT  |  5.62   |   22.8    |  1.46   |
 |             | of (1 2... 100) |         |           |         |
 |-------------+-----------------+---------+-----------+---------|

(*) TARAI-4 is (TAK 4 2 0), where TAK is an interesting function defined by Mr. Ikuo Takeuchi.

(DEFUN TAK (X Y Z)
  (COND ((GREATERP X Y)
         (TAK (TAK (SUB1 X) Y Z)
              (TAK (SUB1 Y) Z X)
              (TAK (SUB1 Z) X Y)))
        (T Y)))
"

I ran his TARAI-4 example on the MACSYMA Consortium KL-10, and observed a very-slightly faster timing for the interpreter's run -- 24.4 milliseconds -- so I suspect his machine is about 5% to 10% faster than MC. But for the compiled-code's run, I clocked only .556 millisecond, rather than .690; possibly he merely overlooked the overhead of "setting-up and getting into" the computation to be timed, which omission is negligible when it is .134 out of 22.8, but not so insignificant when it is .134 out of .690. Yet, the most interesting observation is that the MacLISP-based computation for TARAI-4 is about four times faster than the ILISP-based one, which itself is about four times faster than the INTERLISP one. (Evidently, when compiling the TAK function, the FIXNUM declaration was given, so that GREATERP and SUB1 therein were open-coded). In fact, running in the slow check-everything mode (as opposed to the faster-but-less-error-checking mode), interpreted MacLISP takes only 3/2 the time of ** compiled ** INTERLISP.
-- JonL -- 10/19/80

Date: 21 Oct 1980 09:10:59-PDT
From: CSVAX.fateman at Berkeley
To: lisp-forum at mit-mc
cc: CSVAX.jkf at Berkeley
Re: franz times

franz times
 ___________________________________________________________________________
 |             |     Program     | UCILISP | INTERLISP | MACLISP |Franz/VAX|
 |-------------+-----------------+---------+-----------+---------+---------|
 |             | TARAI-4*        |  57.0   |   26.0    |  22.8   |  73.0   |
 |             |-----------------+---------+-----------+---------+---------|
 | Interpreter | numerical SORT  |  53.9   |   63.7    |  55.0   |         |
 |             | of (1 2... 100) |         |           |         |         |
 |-------------+-----------------+---------+-----------+---------+---------|
 |             | TARAI-4*        |  2.90   |   15.0    |  0.69   |5.3, 4.1**|
 |             |-----------------+---------+-----------+---------+---------|
 | Compiler    | numerical SORT  |  5.62   |   22.8    |  1.46   |         |
 |             | of (1 2... 100) |         |           |         |         |
 |-------------+-----------------+---------+-----------+---------+---------|

(*) TARAI-4 is (TAK 4 2 0), where TAK is an interesting function defined by Mr. Ikuo Takeuchi.

(DEFUN TAK (X Y Z)
  (COND ((GREATERP X Y)
         (TAK (TAK (SUB1 X) Y Z)
              (TAK (SUB1 Y) Z X)
              (TAK (SUB1 Z) X Y)))
        (T Y)))

(**) 5.3 with (1- x) etc [no other declarations, so greaterp is closed comp.]
     4.1 with local function declaration (fast subroutine call)
     times on VAX 11/780 at Berkeley

I understand that the interlisp times can be much improved by using igreaterp and such-like, so it is not clear that maclisp is quite so miraculous. --fateman

Date: 22 Oct 1980 0319-PDT
From: Scott J. Kramer
Re: Time and time again

Date: 21 Oct 1980 1944-PDT
From: CSD.KAPLAN at SU-SCORE
Subject: Lisp immeasurements
To: bboard at SRI-KL

Since everyone is posting messages about LISP comparisons, I thought I would get into the act...
---------------
Date: 20 Oct 1980 1039-PDT
From: CSD.DEA at SU-SCORE (Doug Appelt)
Subject: INTERLISP Timing measurements
To: BBOARD at SU-SCORE, BBOARD at SU-AI

It seems as though it is "common knowledge" that INTERLISP is slow and inefficient, and people occasionally come up with data to "prove" that point, and show INTERLISP is 20 times slower than its nearest competition. What they usually succeed in proving is that they don't understand INTERLISP. I am referring in particular to the recent measurements in the message distributed by JONL. I suspected that someone didn't know what they were doing here, so I did some checking of my own. Nothing fancy, and no particular attempt to factor out measurement noise was made. I feel that a fair comparison between the LISPs should be made in which each LISP uses the same mode of compilation. INTERLISP's default mode of compiling is geared to convenience and flexibility, while MACLISP always produces fast, but more difficult to handle, compiled code. It's possible to make INTERLISP produce efficient compiled code as well. Here are the timings I got with my quick measurements on the TAK function:

Interpreted                      23ms
Compiled (default mode)          16ms
Compiled with (LOCALVARS . T)     9ms
Block compiled                    2ms

The fastest time compares favorably with UCI LISP, and is in the same order of magnitude as the MACLISP timings.
-------
---------------
-------
Date: 21 Oct 1980 2055-PDT
From: Boyer
Subject: TAK in block compiled Interlisp
To: BBOARD, WILKINS, JONL at MIT-MC, SCOTT, CSVAX.FATEMAN at BERKELEY

I tried out (TAK 4 2 0) in Interlisp on our KL here, and I got 2.0 for block compiled code instead of the awful 22.8 that Mr. Goto got on his 20-50. (That's still pretty far from the .69 for Maclisp, but better than the 2.90 for UCILISP). I suspect that Mr. Goto didn't use the block compiler, but only used the ordinary Interlisp compiler. I also used the open compiled IGREATERP instead of GREATERP.
The test was done with the Interlisp TIME function, and I ran the computation 1000 times, thereby reducing error due to getting set up. For those not familiar with Interlisp, I should mention that block compilation of Interlisp is what one uses to get speed, and it's not much more difficult to generate. If you break ordinary compiled code, you can find out what fns have been called, what the args are, etc. If you break block compiled code, you can't find out much. Also, in block compiled code you must worry about which vars are special and which fns can be returned (thrown) to.

Date: 22 October 1980 11:08-EDT
From: Jon L White
To: BOYER at SRI-KL, CSD.DEA at SU-SCORE, MASINTER at PARC-MAXC2, TYSON at UTEXAS-11
cc: LISP-FORUM at MIT-MC, LISP-DISCUSSION at MIT-MC, BBOARD at SRI-KL, BBOARD at SU-AI, BBOARD at SU-SCORE
Re: Response to Goto's lisp timings

As Larry Masinter mentioned in his comment on the Goto timings, comparisons between LISPs are likely to generate more heat than light; but the replies did throw a little more light on some things, especially the runtime variabilities of an Interlisp program, and I thought I'd summarize them and pass them along to the original recipients of the note. However, I'd like to say that the general concern with speed which I've encountered in the past has been between MacLISP and FORTRAN, rather than with some other lisp; and several Japanese research labs have done AI research still in FORTRAN. Just in case you're put off by looking at even more meaningless statistics, I'd also like to apprise you that following the little summary is a brief technical discussion of three relevant points disclosed by the TAK function (out of the many possible points at which to look). These technical points may be new to some of you, and even beyond the LISP question you may find them useful; the key words are (1) UUOLINKS, (2) Uniform FIXNUM representation, and (3) Automatic induction of helpful numeric declarations by a compiler.
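[Archivist's sketch, not part of the original exchange.] Points (2) and (3) are what the TAK timings turn on: JONL noted earlier that the fast MacLISP figure was obtained with the FIXNUM declaration given, so that GREATERP and SUB1 open-compile. A hedged sketch of what such a declared version might have looked like; the declaration syntax here is an assumption of period Maclisp style, offered as illustration rather than a quoted original:

```lisp
;; Sketch: TAK with numeric declarations. Declaring the arguments
;; and result to be FIXNUMs is what permits the comparison and the
;; decrements to compile open rather than as full function calls.
(DECLARE (FIXNUM (TAK FIXNUM FIXNUM FIXNUM)))

(DEFUN TAK (X Y Z)
  (COND ((GREATERP X Y)
         (TAK (TAK (SUB1 X) Y Z)
              (TAK (SUB1 Y) Z X)
              (TAK (SUB1 Z) X Y)))
        (T Y)))
```

Without some such declaration (or an automatically induced one, JONL's point (3)), every GREATERP and SUB1 is a generic call that must check its operands' types at runtime.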
Almost everyone familiar with Interlisp recognized that the ECL people had not requested "block" compilation in the TARAI-4 example, and several persons supplied results from various 20/60's around:

                                           default compilation, with      block-compiled
                                           correspondent rewritten-       timing
                                           code timings
Date: 19 OCT 1980 2127-PDT                  9.8ms                          1.8ms
From: MASINTER at PARC-MAXC2
Date: 20 Oct 1980 1039-PDT                  16.ms                          2.ms
From: CSD.DEA at SU-SCORE (Doug Appelt)
Date: 20 Oct 1980 at 2257-CDT               0.83ms (for UCILISP only)
From: tyson at UTEXAS-11                    2.9ms                          15.0ms

To: HGBAKER.SYMBOLICS at MIT-MC
cc: LISP-FORUM at MIT-MC
Re: What it may mean.

Date: 22 October 1980 20:47 edt
From: HGBaker.Symbolics at MIT-Multics

On my Apple, (tak 4 2 0) takes 294 msec. This makes it about 100 times slower than Interlisp on a KL-10. What does it all mean? Well, for one thing, if the AppleLISP is 1000. times cheaper than Interlisp on a KL-10, there's going to be a lot of hackers badgering their brokers to buy Apple stock.

Date: 26 Feb 1981 14:42:52-PST
From: CSVAX.fateman at Berkeley
To: CSVAX.jkf at Berkeley, jlk at mit-mc, lisp-forum at mit-mc, rz at mit-mc
cc: CSVAX.fateman at Berkeley

 _________________________________________________________
 |             | UCILISP | INTERLISP | MACLISP |Franz/VAX|
 |-------------+---------+-----------+---------+---------|
 | Interpreter |  57.0   |   26.0    |  22.8   |  65.0   |
 |-------------+---------+-----------+---------+---------|
 | Compiler    |  2.90   |   15.0    |  0.69   | 1.1 **  |
 |-------------+---------+-----------+---------+---------|

Times are for (TAK 4 2 0), where TAK is an interesting function defined by Mr. Ikuo Takeuchi.

(DEFUN TAK (X Y Z)
  (COND ((GREATERP X Y)
         (TAK (TAK (SUB1 X) Y Z)
              (TAK (SUB1 Y) Z X)
              (TAK (SUB1 Z) X Y)))
        (T Y)))

(**) 5.3 with (1- x) etc [no other declarations, so greaterp is closed comp.]
     4.1 with local function declaration (fast subroutine call)
     1.1 with > open compiled
     times on a VAX 11/780 at Berkeley, Feb.
26, 1981

Date: 2 March 1981 00:55-EST
From: Charles Frankston
To: CSVAX.fateman at BERKELEY
cc: LISP-FORUM at MIT-MC, masinter at PARC-MAXC, RWS at MIT-XX, guttag at MIT-XX
Re: timings

It is rather obvious that the timings you distributed are wall times for the Lisp Machine, whereas the Vax and MC times count only time spent directly executing code that is considered part of Macsyma. I.e. the Vax and MC times exclude not only garbage collection, but operating system overhead, disk i/o and/or paging, time to output characters to terminals, etc. I submit that comparing wall times with (what the Multics people call) "virtual CPU" time is not a very informative exercise. I'm not sure if the Lisp Machine has the facilities to make analogous measurements, but everyone can measure wall time, and in some ways that's the most useful comparison. Is anyone willing to try the same benchmarks on the Vax and MC with just one user on and measuring wall times?

Also, are there yet any Lisp machines with greater than 256K words? No one would dream of running Macsyma on a 256K word PDP10, and I presume that goes the same for a 1 Megabyte Vax. The Lisp Machine may not have a time sharing system resident in core, but in terms of amount of memory needed for operating system overhead, the fanciness of its user interface probably more than makes up for that. I'll bet another 128K words of memory would not be beyond the point of diminishing returns, insofar as running Macsyma.

Lastly, the choice of examples. Due to internal Macsyma optimizations, these examples have a property I don't like in a benchmark. The timings for subsequent runs in the same environment differ widely from previous runs. It is often useful to be able to factor out setup times from a benchmark. These benchmarks would seem to run the danger of being dominated by setup costs. (Eg.
suppose disk I/O is much more expensive on one system; that is probably not generally interesting to a Macsyma user, but it could dominate benchmarks such as these.) I would be as interested as anyone else in seeing the various lisp systems benchmarked. I hope there is a reasonable understanding in the various Lisp communities of how to do fair and accurate benchmarking, else the results will be worse than useless; they will be damaging.

Date: 23 JAN 1981 1821-EST
From: RMS at MIT-AI (Richard M. Stallman)

What do you think of changing the syntax of TRACE to something like (TRACE function &rest options)? I think this would be less peculiar and more like all the rest of the system. For when you want to trace several functions at once, there could be a TRACE-MULTI which takes a list of functions and then one list of options, and traces all the functions with the same options.

Date: 24 January 1981 2128-EST (Saturday)
From: Guy.Steele at CMU-10A
To: RMS at MIT-AI (Richard M. Stallman)
cc: lisp-forum at MIT-MC
Re: Proposed change to TRACE

My first reaction is that TRACE-MULTI is too long a name for something that is used almost exclusively interactively. This leads to the suggestion that TRACE be left alone, but for the simple case of a single function have (TR function &rest options).

Date: 26 JAN 1981 1253-EST
From: DICK at MIT-AI (Richard C. Waters)
Re: trace change

I like the idea of leaving trace alone, and having a new fn TR with nicer shorter arguments. I would also like to stress the really good idea which underlies this, i.e., that whenever possible improvements should be made as extensions rather than replacements. Dick

Date: 7 Mar 1981 0947-MST
From: Griss at UTAH-20 (Martin.Griss)
To: lisp-forum at MIT-MC, lisp-discussion at MIT-MC
cc: griss at UTAH-20
Re: Faculty Positions at Utah

I would like to bring the following announcement to your attention, and would appreciate it if it could be forwarded to any likely prospects.
(We are not specifically looking for "LISP" people, though I personally would be delighted to have an additional person interested in LISP implementations, personal machines, LISP extensions, and LISP applications.) Please contact me for further information [Griss@utah-20] ----------------------------------------------------------------------- UNIVERSITY OF UTAH Computer Science Faculty Positions The Department of Computer Science at the University of Utah seeks applications for the ranks of Assistant and Associate Professor. Both regular and visiting appointments will be considered. A successful candidate for Assistant Professor must earn the Ph.D. in Computer Science or a related field prior to December 1981. Candidates for Associate Professor must have, in addition, at least three years of teaching and research experience in Computer Science. The Department currently has 13 tenure-track faculty members, as well as an additional 4 serving in a research capacity. The student population includes approximately 350 undergraduate majors, 55 Master's degree students, and 30 PhD students. The Department has an excellent complement of computing facilities, with the usual conveniences like terminals in all offices, access via both the Arpanet and Telenet, and very powerful facilities for computer graphics and document production. Research equipment includes a DECsystem 2060, Burroughs 1865, DEC PDP-10 KA, PDP-11/60, PDP-11/45, HP3000/33, E&S Picture System, Grinnell color frame buffer, a large variety of mini- and micro-computers, a Computervision CAD VLSI design system, and shortly a VAX/750. Output devices include a Barco high resolution color monitor, a four color plotter, and a Mergenthaler Omnitech/2000 Photocomposer.
Special research facilities within the Department support projects in algebraic computation, computer-aided geometric design, data-flow and multiprocessor architectures, graphical software development tools, high-precision photographic work, sensory information processing, software portability, symbol manipulation systems, and text-searching machines. Additionally, the Department is the major contributor to a University facility for VLSI fabrication, which is now operational in its building. Starting date for the appointment is 1 July 1981. Direct vita, along with the names of three or more references, to: Professor Robert M. Keller Chairman, Faculty Search Committee Department of Computer Science University of Utah Salt Lake City, UT 84112 The University of Utah is an Affirmative Action, Equal Opportunity Employer. Date: 18 AUG 1980 0046-EDT From: EB at MIT-AI (Edward Barton) Re: Request for comments (not about characters and fixnums!) (Some of you have seen this before; apologies.) UNWIND-PROTECT allows you to make sure certain actions (such as the closing of a file) happen when a scope is exited regardless of whether that scope is exited in the normal flow of control, by *THROW, or by ^G quit. However, it requires that the expressions specifying the actions be written directly in the UNWIND-PROTECT form; it does not directly support figuring out "on the fly" what cleanup actions will need to be performed. For example, suppose you want the following particular organization of code: (defun F (x y) ... (open-a-bunch-of-files x (f y)) (munch-the-files-for-a-while) (close-the-files) ... ) Suppose further that the number of files opened and the location where the file objects are kept is dependent on various parameters and cannot be known when F is written. IOTA and PHI cannot be used in that case, yet some form of UNWIND-PROTECT is desirable to keep the files from remaining open in the event of a QUIT. Consider also another example. 
Suppose you want to explicitly indicate in the file itself that a certain file is always to be read using a particular reader syntax. It is tempting to write at the beginning of the file some expression that will set up the reader syntax (perhaps by loading a file where it is defined) and write at the end of the file some expression that will undo the changes, since they were intended to be local. Again, though, some form of UNWIND-PROTECT is necessary if you are to ensure that the changes are local even in the event of a QUIT during loading. As a final example, consider the following command loop: (defun command-loop () (loop as cmd = (read-a-command) until (eq cmd 'stop) do (funcall (get cmd 'command-server)))) Suppose you want it to be possible for the command servers to perform arbitrary actions, but with the assurance that (if the servers so request) various cleanup actions will be done whenever COMMAND-LOOP is exited by any means. I would like to propose a set of functions and forms, implemented with UNWIND-PROTECT, that allow you to handle cases like the above. This message is a request for comments on the general idea and the particular conventions proposed. (UNWINDING-SCOPE form1 ... formn) MACRO evaluates the forms and returns the last one, allowing calls to QUEUE-UNWINDING-ACTION while dynamically within that evaluation. Expands to an UNWIND-PROTECT surrounded by a lambda-binding of a specvar used as a queue. The restoration expression of the UNWIND-PROTECT takes actions from the queue and performs them in the order they were queued. (QUEUE-UNWINDING-ACTION when function ... args ...) LSUBR queues the given function to be APPLY'ed to the given arguments when evaluation of the forms terminates, either normally or otherwise (throw, quit, error). WHEN may be T (apply always), NORMAL (apply only if normal termination), or ABNORMAL (apply only if not normal termination). (ABNORMAL-UNWIND-PROTECT e u1 ...
un) MACRO (isn't really related to the other two, but while we're on the subject of UNWIND-PROTECT....) is just like UNWIND-PROTECT but only evaluates U1 ... Un if the evaluation of E terminates abnormally. For debugging convenience QUEUE-UNWINDING-ACTION should also be legal at top level, but print a warning if thus used. (RESET-GLOBAL-UNWINDING-SCOPE) or some such thing would "exit" and re-enter the fictitious global scope. As an example, consider the following version of OPEN for use within an unwinding scope: (defun UW-OPEN (file &optional (options NIL) (when T)) (let ((file-object nil)) (abnormal-unwind-protect (progn (setq file-object (open file options)) (or (eq file-object tyi) (eq file-object tyo) (queue-unwinding-action when 'close file-object))) (and (filep file-object) ; in case quit out of UW-OPEN (not (eq file-object tyo)) (not (eq file-object tyi)) (close file-object))) file-object)) One possible area for comment: It might be useful to have named unwinding scopes so that you need not push your restoration actions on the innermost one. (Personally, I think that the system LOAD function should contain an unwinding scope to allow files to specify local changes without requiring the user to use some special LOAD function that the file is intended to be loaded with. For certain reasons, if this were done it would also be good to have a separate LOAD that did not establish a new scope.) Comments? Counterproposals and arguments about uselessness welcome.  Date: 08/18/80 02:02:01 From: MOON5 at MIT-AI To: lisp-forum at MIT-MC cc: EB at MIT-AI Re: unwinding-scope This is a ridiculous proposal. If you want to keep track of state you should keep it in variables. Nothing says you aren't allowed to have conditionals, mapcars of eval down a list, or whatever else you like in your unwind-protect cleanup forms.  Date: 18 August 1980 03:26-EDT From: Robert W.
Kerns To: MOON at MIT-MC, EB at MIT-MC, LISP-FORUM at MIT-MC Re: UNWINDING-SCOPE EB raises at least one non-ridiculous point: LOAD should provide a 'UNWINDING-SCOPE' or equivalent. But I believe that EB's point is that he believes this is common enough to want a macro to make it easier to do and clearer that this is what is being done. If you don't agree, fine, but I don't think labeling it 'ridiculous' says anything about why you don't agree or allows anyone to make any judgment as to whether you are right, wrong, or biased. Note that the LISPM has several macros to do similar types of things. Consider LOGIN-SETQ and friends. Let me go on record as saying I don't think the name relates to the function at all, although I haven't got a better substitute. I also may actually agree with MOON (not that it's ridiculous, but that it may not be common enough to be worthwhile). I'd appreciate a bit more justification. I see I missed that you already thought they maybe should be named. If LOAD does one, they WILL need to be named...  Date: 18 August 1980 04:27-EDT From: Daniel L. Weinreb Re: In reply to EB: unwind-protect There may be something in what you say about LOADing. Other than that, I don't think these are worth adding to the standard system and the manual; the need is just too rare. You can do them yourself if you want them.  Date: 18 AUG 1980 1545-EDT From: EB at MIT-AI (Edward Barton) To: LISP-FORUM at MIT-AI cc: DLW at MIT-AI Re: UNWIND-PROTECT proposal (clarification) I don't necessarily propose that those things be part of "the Maclisp system," except perhaps as applied to LOAD. I propose something more on the order of a LIBLSP package.  Date: 18 AUG 1980 1603-EDT From: EB at MIT-AI (Edward Barton) To: LISP-FORUM at MIT-AI cc: MOON5 at MIT-AI Re: unwinding-scope MOON5@MIT-AI 08/18/80 02:02:01 Re: unwinding-scope This is a ridiculous proposal. If you want to keep track of state you should keep it in variables. 
Nothing says you aren't allowed to have conditionals, mapcars of eval down a list, or whatever else you like in your unwind-protect cleanup forms. Nothing says it isn't allowed, and neither did I, since my proposal is built on just that sort of capability (though EVAL would be the wrong thing to map). What I proposed was a uniform convention for doing the things I described so that if somebody wants them he doesn't have to re-invent them. And so that he doesn't structure his code differently from the way he really wants just so that the existing IOTA works. And so that something general like the proposed LOAD can be written. "If you want to keep track of state you should keep it in variables." I'm not sure what is being proposed here. My proposed macros do use variables, as does most code. If you propose somehow putting things in variables rather than anywhere else, I don't see why structures, lists, properties, etc. are suddenly bad to use. If you propose always writing something like an IOTA form, I already explained that you may not know in advance what or how many things will want to be undone; review the examples. If you are merely making an unsupported statement (similar to the statement once made that if your reader macros have side effects then you're "clearly doing something wrong") then the statement, of course, stands as such. Date: 13 November 1980 1238-EST (Thursday) From: Guy.Steele at CMU-10A To: Scott.Fahlman at CMU-10A cc: lisp-forum at MIT-MC Re: Snapped Elastic Actually, I didn't even have in mind all the hair of actual versus allocated sizes, only a primitive that could shrink a vector, but never grow it. What would you think of a medium-level primitive that would attempt to resize a vector, returning it if it succeeded and returning false if it failed? Then different implementations could have different criteria for whether it could resize a vector in place.
User-level code would do: (DEFUN VECTOR-RESIZE (VEC NEWSIZE) ;may not return EQ pointer (OR (LOW-LEVEL-FAST--BUT-NOT-GUARANTEED-RESIZE VEC NEWSIZE) (DO ((V (MAKE-VECTOR NEWSIZE)) (N (MIN (VECTOR-LENGTH VEC) NEWSIZE)) (J 0 (+ J 1))) ((= J N) V) (VSET V J (VREF VEC J))))) Then whatever cases are handled well by the low-level guts would be speedy and result in an EQ pointer, and otherwise the user code would have to deal with the copying. Some implementations would never allow resizing; some would always win, by using forwarding pointers; some would win when decreasing but not increasing; some would win for any variation below the initially allocated size; and so on.
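[Editorial note. EB's UNWINDING-SCOPE proposal earlier in this file describes its intended expansion ("an UNWIND-PROTECT surrounded by a lambda-binding of a specvar used as a queue") but never spells it out. The following is a minimal reconstruction of that idea in modern Common Lisp terms, not code from the archive: the names follow EB's mail, the special variable is an assumption suggested by his description, and the WHEN argument (T/NORMAL/ABNORMAL) is omitted for brevity.

```lisp
;; Reconstruction of EB's proposal, not original mail.  Actions queued
;; during the dynamic extent of an UNWINDING-SCOPE are run when the scope
;; is exited, whether normally or by throw/quit/error, in the order in
;; which they were queued.
(defvar *unwinding-actions* nil)

(defmacro unwinding-scope (&body forms)
  `(let ((*unwinding-actions* '()))   ; specvar rebound for each scope
     (unwind-protect (progn ,@forms)
       (dolist (action (reverse *unwinding-actions*))
         (apply (car action) (cdr action))))))

(defun queue-unwinding-action (function &rest args)
  (push (cons function args) *unwinding-actions*))
```

Under this sketch, EB's UW-OPEN reduces to opening the file inside the scope and calling (queue-unwinding-action #'close file-object).]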