Date: 6 Sep 1983 1844-MDT
From: Julian Padget
Subject: Re: Function cells
To: Moon@SCRC-TENEX, lisp-forum@MIT-MC
cc: Padget@UTAH-20
In-Reply-To: Your message of 5-Sep-83 1304-MDT

Since it was I who initiated this discussion, I feel it behoves me to reply to Dave Moon. For the most part the messages on the subject of function cells have been concise and factual. I do not wish to slander anybody, but 'flaming' only became a problem when some people from MIT joined. The message length problem has been exacerbated by the popularity of inserting one's own comments into a message to construct a reply. An alternative is a few minutes with a pencil and paper, which serves to clarify the mind wonderfully. The subject matter seems entirely suited to lisp-forum; one cannot always expect everything in such a digest to be of interest to everyone, thus I find your reaction surprising and somewhat churlish. I hope you take these remarks in the spirit in which they are intended.

--Julian Padget.
-------

Date: 6 September 1983 04:28 EDT
From: Kent M. Pitman
Subject: Administrative Note
To: LISP-FORUM @ MIT-MC

We created LISP-FORUM several years ago as a bridge between the implementors of many dialects of Lisp who had formerly worked in isolation. Just as I frequently remind newcomers to the list, I suppose it's worth mentioning again to those who are already established on the list...

This list reaches a lot of busy people. It is not the place for long discussions which are not of general interest. It is also not the place for long, rambly arguments which have not been carefully thought out and carefully presented. If in doubt (and even when not), bounce your ideas off of other people before sharing them with the group. We've lost too many valuable contributors in the past due to what they felt was excessive flaming. It would be sad if more key people started to feel that way and withdraw from the group. Thanks.

--kmp

Date: 5 September 1983 19:15 EDT
From: George J. Carrette
Subject: Zippy the mailer, or ... are we on the net yet?
To: jkf @ UCBKIM
cc: LISP-FORUM @ MIT-MC, Moon @ SCRC-TENEX
In-reply-to: Msg of Mon 5 Sep 83 13:20:31 PDT from jkf%ucbkim at Berkeley (John Foderaro)

Really now. In all probability Dave Moon is reading his mail on his own personal lisp machine, which has about as much or more physical memory and speed as your usual DEC-20 or VAX-11/780 installation for 50 users, so it just doesn't make much sense in this case to be making snide comments about poor quality mail reading programs in the context of stifling discussions on mailing lists. Given that bit of information, maybe you can just take Dave's very polite comment at face value.

-gjc

Received: from ucbkim.ARPA by ucbvax.ARPA (4.9/4.7) id AA12745; Mon, 5 Sep 83 13:21:08 PDT
Received: by ucbkim.ARPA (4.6/4.2) id AA15598; Mon, 5 Sep 83 13:20:31 PDT
Date: Mon, 5 Sep 83 13:20:31 PDT
From: jkf%ucbkim@Berkeley (John Foderaro)
Message-Id: <8309052020.AA15598@ucbkim.ARPA>
To: Moon@SCRC
Subject: Re: Function cells
Cc: lisp-forum@MC
In-Reply-To: Your message of Monday, 5 September 1983, 14:52-EDT

While I am not particularly interested in the discussion on function cells, I think that lisp-forum is the precise place to have such a discussion for those who are interested. I've never understood why so many people insist on stifling discussions on mailing lists. I can only conclude that there are a lot of crufty mail reading programs out there that force you to read every character of every letter you receive.

Received: from scrc-schuylkill by scrc-cupid with CHAOS; 5 Sep 1983 14:45:22-EDT
Date: Monday, 5 September 1983, 14:52-EDT
From: David A. Moon
Subject: Function cells
To: ZVONA at OZ, RWK at SCRC, BENSON at NIMBUS, Bobrow.PA at PARC-MAXC, Deutsch.PA at PARC-MAXC, PADGET at UTAH-20, Teitelman.PA at PARC-MAXC, TIM at OZ
Cc: lisp-forum at MC

Could the people who are interested in this "function cells" discussion please move it to another mailing list, where I don't have to see it? If other people on Lisp-forum feel that the discussion should continue on Lisp-forum, please send mail to me directly, and if enough people so believe I will remove myself from lisp-forum.

Date: Mon, 5 Sep 1983 13:41 EDT
Message-ID: <[MIT-OZ].ZVONA. 5-Sep-83 13:41:00>
From: ZVONA@MIT-OZ
To: Robert W. Kerns
Cc: BENSON@SPA-Nimbus.ARPA, Bobrow.PA@PARC-MAXC, Deutsch.PA@PARC-MAXC, lisp-forum@MIT-MC, PADGET@UTAH-20, Teitelman.PA@PARC-MAXC, TIM%MIT-OZ@MIT-MC
Subject: Function cells
In-reply-to: Msg of 3 Sep 1983 21:06-EDT from Robert W. Kerns

Well. There are probably people who can argue this better, but none have stepped forward, so...

    What I don't understand is why everybody goes on about the FUNCTION cell in favor of the VALUE cell. Why isn't anybody proposing we do away with the VALUE cell? The history of the VALUE cell in MacLisp *IS THE SAME AS THAT OF THE FUNCTION CELL*.

    Really, let's not go half-way. Let's take the argument for merging the VALUE and FUNCTION properties to its logical extreme, and eliminate the property-list altogether!

I think that is a good idea. (This step was seriously discussed in the design of T.) The whole idea of symbols is overloaded and bankrupt. The different uses and features of symbols ought to be separated out. First, there is the idea of INTERNED STRINGS, a useful user-interface feature. So we'll have such a datatype. Then there is the separate idea of NAMES: any object can name another; with an object, you can associate a value and retrieve it efficiently. (T lets you associate a value with any S-expression.)
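The separation sketched here — interned strings as one datatype, and naming as a separate association from any object to a value — can be illustrated in modern Python rather than any of the Lisps under discussion (a rough sketch; all helper names are invented for illustration):

```python
# Interned strings: one canonical object per spelling.
_intern_table = {}

def intern(s):
    """Return the canonical copy of s, storing it on first use."""
    return _intern_table.setdefault(s, s)

# Naming: associate a value with ANY hashable object, not just symbols,
# in the spirit of T's extended variables.
_values = {}

def set_value(name, value):
    _values[name] = value

def value_of(name):
    return _values[name]

# Two distinct-but-equal strings intern to the same object.
a = intern("".join(["snab", "ozzle"]))
b = intern("".join(["snab", "ozzle"]))
assert a is b

set_value(a, 42)                       # a symbol-like name
set_value(("quibbix", 3), "anything")  # but any object can be a name
assert value_of(a) == 42
```

Interning gives identity (is), not merely equality; the naming table is entirely independent of it, which is the point being argued.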
Quite another idea is that of PROPERTIES; Brand-X and (I think) MIT-scheme allow any object to have properties associated with it; there is nothing special about ``symbols''. I never use properties myself; I think that they are almost always an artifact of unclear thought, and that you'd do better to use something else. The main uses are in user interface, when a user uses a symbol to name a structure, in which case you'd do better to use a hashtable that maps symbols to their associated structures; and in storing random crufties, in which case you should probably be using DEFSTRUCT.

Unfortunately, naming is currently all mixed up with EVAL, which is pretty much bankrupt itself. The reason that symbols currently have values, and other things do not, is that the main thing that gets values is EVAL, and other things mean something special to EVAL. So lists, for example, will not have their values used much, and from what I can see, T makes no use of the extended variable feature. Brian Smith's thesis straightens this particular mess out by replacing EVAL with two things: NORMALIZE and DOWN-ARROW. DOWN-ARROW gets the ``value'' of something: the thing that is designated by its argument. NORMALIZE simplifies expressions. If you want to understand what QUOTE is really for, and how naming should work, you should read his thesis (it's an LCS TR). But his 2-LISP is probably too outré to discuss on this mailing list. ...

    You can view (FUNCTION ...) as meaning "do what you do to the CAR of a combination to get the function to be performed". It doesn't buy you much to say that what you do with the FIRST of a form should be the same as what you do to the SECOND. It just isn't true after you get your hands on the function, especially in the case of macros.

Your point here (which must be the crux of your argument) is not quite clear to me.
Obviously the car of all forms can not be treated the same way as the other subforms; but in the case of special forms such as SETQ, various non-car subforms are treated in different ways anyway. The issue is meaningful only for forms that are simple combinations, i.e. function applications. [FUNCTION of a symbol that refers to a macro returns something completely random on the lisp machine, anyway. What does your last sentence mean?] The fact is that even for simple combinations, the CAR is treated specially, since it is the one that is applied to all the others. The claim is not that the uniformity of handling at this level is especially winning. The uniformity really arises from the fact that there is only one thing going on here, and that is NAMING. And the names of functions are no different from the names of anything else.

    I have seen many users of lisps without function cells SCREW THEMSELVES TO THE WALL by using something as a temporary variable that happens to also be a function name. LIST is the most common offender. (This is made worse by lisps which defaultly use dynamic scoping, but people also have screwed themselves doing (SETQ LIST (GET-ITEM-LIST 'FOO)) at top level).

There are lots of ways to screw yourself when you are first learning Lisp. (Try tracing PRINT.) In a language that is lexically scoped and in which the system functions are locked and sacred (as, I believe, they are in MIT scheme) the danger is much reduced, and not worth making the language counter-intuitive over.

        because it is intuitively clearer if the defining and using forms have the same syntax. Moreover, this eliminates all the overhead of function specs. And then defun again has a clear macroexpansion: it is really just sugar for setf!

    Bullshit. DEFUN started out life as sugar for DEFPROP, not PUTPROP. The difference is that the compiler was then considered free to compile things defined with DEFPROP.
    In more modern times, the compiler is considered free to compile things "quoted" with FUNCTION rather than QUOTE. So...

In an ideal world (which is not even particularly hard to achieve -- the MIT scheme implementation, at least, succeeds) there should be no difference between code to be compiled and code to be interpreted (except for declarations that will allow the compiler to optimize code without changing the semantics). It should never be necessary for the user to reason about differences between the behaviors of his program compiled and interpreted. Any sort of ``compilable contexts'' kludge, such as #', special handling of DEFPROP, and (PROGN 'COMPILE ...), makes it much less clear what the compiler is doing, and harder to think about how to write code.

    You can't argue that because ((:property snabozzle quibbix) . args) doesn't work that the idea of a function spec is inelegant! At most you can say that a piece of it that should work doesn't.

Certainly it would be an incremental improvement if this worked. In the current scheme,

(DEFUN (:PROPERTY FOO :FUN-PROP) (X) (BAR X))
==> (SETF #'(:PROPERTY FOO :FUN-PROP) #'(LAMBDA (X) (BAR X)))

This doesn't work currently on the lisp machine. Presumably it does in the incomplete Common Lisp function spec proposal.

    (defun (symbol-value symbol) ...)

    ARGH! This does nobody any favors. It is every bit as bad as you imagine distinguishing between functions and values is.

OK, fair enough, this sucked.

    Most of the people involved with Common Lisp have faced this issue and been thinking about it for 5 years or more.

So have the Scheme people; so have I, for that matter. [Disclaimer: I do not work on any brand of Scheme, and am quite skeptical of all existing implementations. I use lisp machines, which are clearly a vastly superior programming system, even though the Lisp they are based on is a little crufty.]

    Most of the arguments against [function cells] are based on an unstated assumption that saying (FUNCALL X ...) is an unwanted complexity rather than a helpful clarification, or that the slight simplification of eliminating the function cell is worth sacrificing the clarity, or that the slight simplification of calling EVAL on the CAR of the form is worth anything at all.

These assumptions aren't really unstated; they are firm convictions of many people. Some people that have programmed in languages both with and without function cells find those without to be much clearer; no sacrifice is involved.

Received: from SCRC-YUKON by SCRC-TENEX with CHAOS; Sat 3-Sep-83 21:12:55-EDT
Date: Saturday, 3 September 1983, 21:06-EDT
From: Robert W. Kerns
Subject: Function cells
To: ZVONA at OZ, Bobrow.PA at PARC-MAXC
Cc: BENSON at SPA-Nimbus.ARPA, Deutsch.PA at PARC-MAXC, lisp-forum at MC, PADGET at UTAH-20, Teitelman.PA at PARC-MAXC, TIM%MIT-OZ at MC
In-reply-to: <[MIT-OZ].ZVONA. 3-Sep-83 14:38:45>

    Date: Sat, 3 Sep 1983 14:38 EDT
    From: ZVONA@MIT-OZ

    The "function spec" problem in Common Lisp is one that makes clear the advantages of treating "functions and values" uniformly. It is common to store functions as properties of symbols. This allows a clean implementation of data-directed dispatching: you write (funcall (get snabozzle 'quibbix) . args) in order to do a quibbix-type dispatch on the snabozzle. To support this, you want a clean way to write the functions that will be stored on the property list and to get them into the property list. In a lambda calculus system, you might write (put 'some-snabozzle 'quibbix (lambda ...)) but modern lisps have defun, which has a lot of useful syntax that you'd like to use in defining the dispatched-to functions.
    This syntax had a number of problems; among them, especially, that besides the "function cell" and properties there are other places that you might want to store functions. Therefore, lisp machine lisp did an almost-upward-compatible extension to this in which the defun name (the first subform), if a list, would dispatch on the car of the list if possible. The car of the list could then be a keyword that would tell where to put the function: (defun (:property some-snabozzle quibbix) arglist . body) for example. These lists (such as (:property ...)) were called "function specs". The lisp machine provides for each function spec a set of things that can be done to it: you can define it, undefine it (make it unbound), get its definition, and so on. All this amounts to a fair amount of conceptual overhead (though the total amount of code to implement it fits in a few pages). It is no longer obvious just what it is that defun abstractly macroexpands into; it is doing some magic with the function specs behind your back. Moreover, it is often unclear in what contexts it is allowable to use a function spec. For example, beginners who are first introduced to function specs often try to write ((:property snabozzle quibbix) . args) or (funcall (:property snabozzle quibbix) . args). This suggests that perhaps what one ought to be writing is (defun (get 'some-snabozzle 'quibbix) arglist . body)

No, one should write (funcall #'(:property snabozzle quibbix) . args), just like one should write (funcall #'snabozzle . args) or (funcall #'(lambda (x) (+ x x)) . args). The non-uniformity here is that (funcall 'snabozzle . args) or (funcall '(lambda (x) (+ x x)) . args) will work at all. The only problem with writing ((:property snabozzle quibbix) . args) is that it is visually confusing. You can't argue that because ((:property snabozzle quibbix) . args) doesn't work that the idea of a function spec is inelegant! At most you can say that a piece of it that should work doesn't.
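The data-directed dispatch idiom under discussion — look the handler up on a property list, then funcall it — can be sketched in Python rather than Lisp (a rough illustration only; the dict-based plist and the helper names are invented, and the thread's own example names snabozzle/quibbix are reused):

```python
# Property lists modeled as one dict per "symbol": property -> value.
plists = {}

def put(sym, prop, value):
    """(put sym prop value): store a property on sym's plist."""
    plists.setdefault(sym, {})[prop] = value

def get(sym, prop):
    """(get sym prop): fetch a property, or None if absent."""
    return plists.get(sym, {}).get(prop)

# Store the dispatched-to functions as properties, as
# (put 'some-snabozzle 'quibbix (lambda ...)) would.
put("some-snabozzle", "quibbix", lambda x: ("snabozzle-quibbix", x))
put("other-thing",    "quibbix", lambda x: ("other-quibbix", x))

def quibbix(obj, arg):
    """Data-directed dispatch: fetch the handler, then call it.
    In a single-namespace Lisp this is ((get obj 'quibbix) arg);
    with a function cell it must be spelled with funcall."""
    handler = get(obj, "quibbix")
    return handler(arg)

assert quibbix("some-snabozzle", 7) == ("snabozzle-quibbix", 7)
```

Note that in Python, as in a Lisp without function cells, the fetched handler is called directly; no funcall-like operator is needed, which is exactly the uniformity being debated.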
    because it is intuitively clearer if the defining and using forms have the same syntax. Moreover, this eliminates all the overhead of function specs. And then defun again has a clear macroexpansion: it is really just sugar for setf!

Bullshit. DEFUN started out life as sugar for DEFPROP, not PUTPROP. The difference is that the compiler was then considered free to compile things defined with DEFPROP. In more modern times, the compiler is considered free to compile things "quoted" with FUNCTION rather than QUOTE. So...

    That is, the defun above is really (setf (get 'some-snabozzle 'quibbix) (lambda arglist . body)). There is just one difficulty with this: what does (defun drofnats (self) (capitalize (string-reverse self))) macroexpand into? It ought to be (setf drofnats (lambda (self) (capitalize (string-reverse self)))) but that sets the value cell, not the function cell!

In the current scheme,

(DEFUN FOO (X) (BAR X))
==> (SETF #'FOO #'(LAMBDA (X) (BAR X)))

(DEFUN (:PROPERTY FOO :FUN-PROP) (X) (BAR X))
==> (SETF #'(:PROPERTY FOO :FUN-PROP) #'(LAMBDA (X) (BAR X)))

    This, finally, is the point: The elegant setf syntax for defun was abandoned for common lisp because it didn't "work" on symbols. The true story is that the uniform handling of functions as values is a self-consistent system, and function cells are a confused mess.

That is indeed the problem with your scheme: it turns function cells into a confused mess. It also ignores every shred of consistency in favor of your own internal confusion. Proof by Solipsism is not acceptable.

*** *** *** ***

    It is worth mentioning that (function ...) is intimately tied up with the function cell lossage. Common lisp weirdly chose to implement lambda right, but to require a no-op #' in front of it. (Anyone who wants to win, of course, can define lambda as a macro that expands into #'(lambda ...). Takes all kinds.)

You're right, it is connected. You can view (FUNCTION ...) as meaning "do what you do to the CAR of a combination to get the function to be performed". It doesn't buy you much to say that what you do with the FIRST of a form should be the same as what you do to the SECOND. It just isn't true after you get your hands on the function, especially in the case of macros.

The distinction here is very much like the distinction between nouns and verbs in English. Frequently a word is both a noun and a verb, but the distinction is clear from context. For example, LIST is both a noun and a verb. I have seen many users of lisps without function cells SCREW THEMSELVES TO THE WALL by using something as a temporary variable that happens to also be a function name. LIST is the most common offender. (This is made worse by lisps which defaultly use dynamic scoping, but people also have screwed themselves doing (SETQ LIST (GET-ITEM-LIST 'FOO)) at top level).

*** *** *** ***

    Common lisp people: defun setf syntax seems like such a win that perhaps it could be salvaged by the following kludge (which seems better than function specs): defun has setf-syntax except on symbols. It uses the function cell with symbols. You can set the value cell of a symbol with (defun (symbol-value symbol) ...)

ARGH! This does nobody any favors. It is every bit as bad as you imagine distinguishing between functions and values is.

This debate really annoys me, partly because not a single new thing has been said (with the exception of your SETF/DEFUN scheme, which I detest). Most of the people involved with Common Lisp have faced this issue and been thinking about it for 5 years or more. Most of the arguments against it are based on an unstated assumption that saying (FUNCALL X ...) is an unwanted complexity rather than a helpful clarification, or that the slight simplification of eliminating the function cell is worth sacrificing the clarity, or that the slight simplification of calling EVAL on the CAR of the form is worth anything at all.
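The LIST-shadowing hazard described here has a direct analogue in any single-namespace language. A rough Python sketch (the helper get_item_list is invented, standing in for the real lookup in the quoted example; this illustrates the hazard, not a verdict on either side of the debate):

```python
# In a single-namespace language, binding a variable whose name is also
# a function's name shadows the function — the Lisp accident
# (SETQ LIST (GET-ITEM-LIST 'FOO)) transposed onto Python's builtin list.

def get_item_list(key):
    # Hypothetical stand-in for the real lookup in the example.
    return [key, key]

list = get_item_list("foo")   # "temporary variable" shadows the builtin

shadowed = False
try:
    list("a")                 # the function meaning of list is unreachable
except TypeError:
    shadowed = True

del list                      # remove the shadowing binding...
assert shadowed
assert list("ab") == ["a", "b"]   # ...and the builtin is visible again
```

As ZVONA's rejoinder notes, lexical scoping confines the damage: a local binding of list would shadow it only within one function body, whereas the top-level assignment above clobbers it for the whole module until deleted.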
What I don't understand is why everybody goes on about the FUNCTION cell in favor of the VALUE cell. Why isn't anybody proposing we do away with the VALUE cell? The history of the VALUE cell in MacLisp *IS THE SAME AS THAT OF THE FUNCTION CELL*. I.e., they both used to simply be properties on the property list. The evaluator would do a (GET SYMBOL 'VALUE), just like it would do a (GET SYMBOL 'EXPR). It is obvious that these are both properties of a symbol, just like (GET SYMBOL 'SI:FLAVOR) is.

Really, let's not go half-way. Let's take the argument for merging the VALUE and FUNCTION properties to its logical extreme, and eliminate the property-list altogether!

Personally, I'd rather keep distinct meanings separate (noun-meaning (value), verb-meaning (function), adjective-meaning (flavor)), and try to have the most EXPRESSIVE language, not the SIMPLEST language.

Date: Sat, 3 Sep 1983 14:38 EDT
Message-ID: <[MIT-OZ].ZVONA. 3-Sep-83 14:38:45>
From: ZVONA@MIT-OZ
To: Bobrow.PA@PARC-MAXC.ARPA
Cc: Eric Benson , Deutsch.PA@PARC-MAXC.ARPA, lisp-forum@MIT-MC.ARPA, Julian Padget , Teitelman.PA@PARC-MAXC.ARPA, TIM%MIT-OZ@MIT-MC.ARPA
Subject: Function cells
In-reply-to: Msg of 1 Sep 1983 21:11-EDT from Bobrow.PA at PARC-MAXC.ARPA

The "function spec" problem in Common Lisp is one that makes clear the advantages of treating "functions and values" uniformly. It is common to store functions as properties of symbols. This allows a clean implementation of data-directed dispatching: you write (funcall (get snabozzle 'quibbix) . args) in order to do a quibbix-type dispatch on the snabozzle. To support this, you want a clean way to write the functions that will be stored on the property list and to get them into the property list. In a lambda calculus system, you might write (put 'some-snabozzle 'quibbix (lambda ...)) but modern lisps have defun, which has a lot of useful syntax that you'd like to use in defining the dispatched-to functions.
Maclisp provided the syntax (defun (some-snabozzle quibbix) arglist . body) which would effectively macroexpand into the same thing (and also hack defun-syntax and process declarations and so forth for you). This syntax had a number of problems; among them, especially, that besides the "function cell" and properties there are other places that you might want to store functions. Therefore, lisp machine lisp did an almost-upward-compatible extension to this in which the defun name (the first subform), if a list, would dispatch on the car of the list if possible. The car of the list could then be a keyword that would tell where to put the function: (defun (:property some-snabozzle quibbix) arglist . body) for example. These lists (such as (:property ...)) were called "function specs". The lisp machine provides for each function spec a set of things that can be done to it: you can define it, undefine it (make it unbound), get its definition, and so on. All this amounts to a fair amount of conceptual overhead (though the total amount of code to implement it fits in a few pages). It is no longer obvious just what it is that defun abstractly macroexpands into; it is doing some magic with the function specs behind your back. Moreover, it is often unclear in what contexts it is allowable to use a function spec. For example, beginners who are first introduced to function specs often try to write ((:property snabozzle quibbix) . args) or (funcall (:property snabozzle quibbix) . args).

This suggests that perhaps what one ought to be writing is (defun (get 'some-snabozzle 'quibbix) arglist . body) because it is intuitively clearer if the defining and using forms have the same syntax. Moreover, this eliminates all the overhead of function specs. And then defun again has a clear macroexpansion: it is really just sugar for setf! That is, the defun above is really

(setf (get 'some-snabozzle 'quibbix) (lambda arglist . body))

There is just one difficulty with this: what does (defun drofnats (self) (capitalize (string-reverse self))) macroexpand into? It ought to be

(setf drofnats (lambda (self) (capitalize (string-reverse self))))

but that sets the value cell, not the function cell! This, finally, is the point: The elegant setf syntax for defun was abandoned for common lisp because it didn't "work" on symbols. The true story is that the uniform handling of functions as values is a self-consistent system, and function cells are a confused mess.

*** *** *** ***

It is worth mentioning that (function ...) is intimately tied up with the function cell lossage. Common lisp weirdly chose to implement lambda right, but to require a no-op #' in front of it. (Anyone who wants to win, of course, can define lambda as a macro that expands into #'(lambda ...). Takes all kinds.)

*** *** *** ***

Common lisp people: defun setf syntax seems like such a win that perhaps it could be salvaged by the following kludge (which seems better than function specs): defun has setf-syntax except on symbols. It uses the function cell with symbols. You can set the value cell of a symbol with (defun (symbol-value symbol) ...)

Date: Thu, 1 Sep 83 18:11 PDT
From: Bobrow.PA@PARC-MAXC.ARPA
Subject: Re: Function cells
In-reply-to: "BENSON@SPA-Nimbus.ARPA's message of Thu, 1 Sep 83 15:35 PDT"
To: Eric Benson
cc: Bobrow.PA@PARC-MAXC.ARPA, TIM%MIT-OZ@MIT-MC.ARPA, Deutsch.PA@PARC-MAXC.ARPA, lisp-forum@MIT-MC.ARPA, Julian Padget , Teitelman.PA@PARC-MAXC.ARPA

I don't object to definitions which depend on context. I like to see uses which depend on such context-dependent definitions distinguished from ones which don't.
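ZVONA's reading of defun as mere sugar for setf — a function definition is just an assignment of a function value to a name — holds trivially in any single-namespace language. A minimal Python sketch (the drofnats example from the message, transposed; nothing here is part of any Lisp):

```python
# In a single-namespace language, the definition form...
def drofnats(s):
    # (capitalize (string-reverse self)) transposed to Python
    return s[::-1].capitalize()

# ...is interchangeable with a plain assignment of a lambda,
# which is exactly the "defun is sugar for setf" claim.
drofnats2 = lambda s: s[::-1].capitalize()

assert drofnats("drofnats") == "Stanford"
assert drofnats2("drofnats") == drofnats("drofnats")
```

There is no separate function cell to worry about: both forms bind the same kind of name to the same kind of value, so the symbol-versus-other-place asymmetry that sank the setf syntax in Common Lisp never arises.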
Date: 1 Sep 1983 1746-MDT
From: Julian Padget
Subject: Re: Function cells
To: BOBROW.PA@PARC-MAXC
cc: Padget@UTAH-20, DEUTSCH.PA@PARC-MAXC, LISP-FORUM@MIT-MC, TEITELMAN.PA@PARC-MAXC, TIM%MIT-OZ@MIT-MC
In-Reply-To: Your message of 1-Sep-83 1551-MDT

As an aside to the main discussion, I do not condone the use of both the function and the value cell as a programming style; I am also doubtful about Tim's appellation of this as a feature. I should also remark that there are some 'real' LISPs that only provide one location for keeping a value associated with an id (i.e. no function cell), in particular Cambridge LISP. I would vote with Tim to remove this dichotomy which is being continued in Common LISP, for reasons which are given in more detail below. However, I strongly suspect that the main reason for its retention in CL is not aesthetic but political: how else is MACSYMA going to run under CL? The coding style therein uses this 'feature' quite a lot, and rewriting all the affected parts is not an overnight job.

I maintain that a function IS a value, and the distinction between value and function is erroneous. Although no-one would seriously regard LISP as a faithful implementation of the semantics of lambda calculus, it is not unreasonable to pay lip service to its (our??) heritage. With respect to the question of clarity: there is a problem with the handling of anonymous functions (as in say the MAP functions), and in the case of higher-order functions. Although this example may seem somewhat contrived, it is relevant.
R-Sum and I-Sum define functions to do a curried summation (I provide both for a little variety):

(De R-Sum (a0)
  (Function (Lambda (ai)
    (Cond ((Zerop ai) a0)
          (t (R-Sum (Plus a0 ai)))))))

alternatively:

(De I-Sum (a0)
  (Prog (fn)
    (Return (SetQ fn (Function (Lambda (ai)
      (Cond ((Zerop ai) a0)
            (t (SetQ a0 (Plus a0 ai)) fn))))))))

now using these in a single-cell system gives rise to the following expression:

(((((I-Sum 1) 2) 3) 4) 0)

if FUNCALL must be used this becomes:

(FunCall (FunCall (FunCall (FunCall (I-Sum 1) 2) 3) 4) 0)

In addition to being a matter of style/personal taste, it is a question of consistency (as Peter Deutsch remarked): each element of a form is treated the same way -- even assuming the first element is evaluated repeatedly until an object which can be applied is found, since that is simply a recursion in eval without passing the rest of the form downwards.

--Julian Padget.
-------

Date: Thu, 1 Sep 83 16:19 PDT
From: Deutsch.PA@PARC-MAXC.ARPA
Subject: Re: Function cells
In-reply-to: "Bobrow's message of Thu, 1 Sep 83 14:30 PDT"
To: Bobrow.PA@PARC-MAXC.ARPA
cc: TIM%MIT-OZ@MIT-MC.ARPA, lisp-forum@MIT-MC.ARPA, Julian Padget , Teitelman.PA@PARC-MAXC.ARPA

Danny, I have to side with the Common Lisp / T people on this one. Just because most (but not all) function invocations use names that are bound in a global, flat name space (which all modern Lisp systems are finding ways to enrich), and most (but not all) variables are bound more locally (in this lexical scope? in a dynamically enclosing but far-from-apparent scope? in an enclosing lexical scope?), is not enough of a reason for introducing a mechanism that adds complexity all over the system. T takes the viewpoint that all identifiers are on a par. The compiler is able to take advantage of pragmatic information about things being constant or not dynamically rebound, regardless of whether they are functions or variables. I imagine Common Lisp is the same.
Function cells seemed like a good idea at the time, just like GLOBALVARS. I think they were both pragmatic successes and semantic mistakes.

Received: from SPA-Nimbus by SPA-Nimbus with CHAOS; Thu 1-Sep-83 15:36:05-PDT
Date: Thursday, 1 September 1983, 15:35-PDT
From: Eric Benson
Subject: Re: Function cells
To: Bobrow.PA at PARC-MAXC.ARPA, TIM%MIT-OZ at MIT-MC.ARPA
Cc: Deutsch.PA at PARC-MAXC.ARPA, lisp-forum at MIT-MC.ARPA, Julian Padget , Teitelman.PA at PARC-MAXC.ARPA
In-reply-to: The message of 1 Sep 83 14:30-PDT from Bobrow.PA at PARC-MAXC.ARPA

    Date: Thu, 1 Sep 83 14:30 PDT
    From: Bobrow.PA@PARC-MAXC.ARPA

    Tim, There is a difference in meaning between (fun x y) and (FUNCALL fun x y). In the former case, one expects the meaning of fun to be independent of the context of the call; in the latter, to be dependent on parameters passed in to the environment. One can take the point of view that one doesn't want to distinguish these cases, but I maintain that the code is clearer when you do make the distinction for the call. Similarly, I have seen proposals for doing object oriented programming by having the function name evaluated in the context of the first argument to the function (which would in fact be a closure on some functions and variables). I object to that as well. So for me the question is not about efficiency and limitation of name space, but of what distinctions you want to make apparent at a call. danny bobrow

By this reasoning you must object to the Common Lisp special forms LABELS and FLET, which define lexically scoped functions. With the inclusion of these, function name evaluation is no longer independent of context. It is only dependent on the static context, however; there is no dynamic binding of function names in Common Lisp. Is it only dependence on dynamic context you object to, or dependence on any context?

Date: Thu, 1 Sep 83 14:30 PDT
From: Bobrow.PA@PARC-MAXC.ARPA
Subject: Re: Function cells
In-reply-to: <[MIT-OZ].TIM. 1-Sep-83 07:37:32>
To: TIM%MIT-OZ@MIT-MC.ARPA
cc: Bobrow.PA@PARC-MAXC.ARPA, Deutsch.PA@PARC-MAXC.ARPA, lisp-forum@MIT-MC.ARPA, Julian Padget , Teitelman.PA@PARC-MAXC.ARPA

Tim, There is a difference in meaning between (fun x y) and (FUNCALL fun x y). In the former case, one expects the meaning of fun to be independent of the context of the call; in the latter, to be dependent on parameters passed in to the environment. One can take the point of view that one doesn't want to distinguish these cases, but I maintain that the code is clearer when you do make the distinction for the call. Similarly, I have seen proposals for doing object oriented programming by having the function name evaluated in the context of the first argument to the function (which would in fact be a closure on some functions and variables). I object to that as well. So for me the question is not about efficiency and limitation of name space, but of what distinctions you want to make apparent at a call.

danny bobrow

Date: Thu, 1 Sep 1983 07:37 EDT
Message-ID: <[MIT-OZ].TIM. 1-Sep-83 07:37:32>
From: TIM@MIT-OZ
To: Bobrow.PA@PARC-MAXC.ARPA
Cc: Deutsch.PA@PARC-MAXC.ARPA, lisp-forum@MIT-MC.ARPA, Julian Padget , Teitelman.PA@PARC-MAXC.ARPA
Subject: Function cells
In-reply-to: Msg of 30 Aug 1983 16:54-EDT from Bobrow.PA at PARC-MAXC.ARPA

For the "usual case" it is perfectly reasonable to have separate function and value cells -- I make use of that feature all the time. But the situation changes when you start to pass functions around in value cells. The awkward convention of specifying a function call by

(FUNCALL fun arg1 arg2 ...)  ;where the function is found in the
                             ;value cell of FUN

rather than

(fun arg1 arg2 ...)          ;in the usual case

is perpetuated by having both a function cell and a value cell. In languages which do not adopt this convention (such as Gerry Sussman and Guy Steele's Scheme) it is possible to use the latter form in every case, even when FUN has been passed as an argument.
Thus

    (define (foo fn a b)   ;(The slight difference in the syntax
      (fn (* 2 a) b))      ;of FOO's function spec is incidental)

replaces

    (defun foo (fn a b)
      (funcall fn (* 2 a) b))

I realize that the deep binding scheme in early Lisp implementations made it necessary to have a separate mechanism for fast function lookup, and that the namespace limitation of dynamic scoping makes having a function context and a variable context handy, but I do not think that the function/value dichotomy should continue into the lexically scoped Common Lisp. In the case where a variable contains a function (in its value cell) I see little semantic distinction between the lookup of variables and functions. In this respect, I fail to see your point about "separating out mechanisms." I see it justifiable only for efficiency reasons and for coping with a limited namespace. Tim McNerney  Date: 30 August 1983 23:58 EDT From: George J. Carrette Subject: Function cells. To: LISP-FORUM @ MIT-MC I would credit JONL with introducing what is effectively the function-cell functionality to Maclisp. Even though the internals of the function calling mechanism in maclisp uses various properties, EXPR, FEXPR, LSUBR, SUBR, FSUBR, and also various function cells with the UUOLINK mechanism, the user only need be aware of the existence of DEFUN. Here is some old lisp mail on the topic: 3/1/69 JONL THE CURRENT VERSION OF LISP, "LISP 102", HAS THE FOLLOWING AS-YET UNDOCUMENTED FEATURES: 1) "DEFUN" IS AN FSUBR USED TO DEFINE FUNCTIONS. EXAMPLES ARE

    (DEFUN ONECONS (X) (CONS 1 X))

WHICH IS EQUIVALENT TO

    (DEFPROP ONECONS (LAMBDA (X) (CONS 1 X)) EXPR)

THE NOVEL FEATURE OF "DEFUN" IS THAT ONE NEED NOT BE SO CONCERNED WITH BALANCING PARENTHESES AT THE VERY END OF THE FUNCTION DEFINITION. ALSO, THE "LAMBDA" NEED NOT BE DIRECTLY INSERTED. Of course, at the time this note was written none of the hairier specialized calling mechanisms had been added to maclisp.
-gjc  Date: Tue, 30 Aug 83 13:54 PDT From: Bobrow.PA@PARC-MAXC.ARPA Subject: Re: function cells In-reply-to: "Deutsch's message of Tue, 23 Aug 83 14:13 PDT" To: Julian Padget cc: Deutsch.PA@PARC-MAXC.ARPA, lisp-forum@MIT-MC.ARPA, Bobrow.PA@PARC-MAXC.ARPA, Teitelman.PA@PARC-MAXC.ARPA I must admit that it was my idea of having separate function cells, value cells, pname cells, and property lists for Lisp atoms in BBN-Lisp. I am a believer in separating out mechanisms when there are distinct differences in function. I think lookup of functions is semantically different than lookup of variables. The time scales for which variable bindings and fn definitions are applicable are quite different in the usual case.  Date: Tue, 23 Aug 83 14:13 PDT From: Deutsch.PA@PARC-MAXC.ARPA Subject: Re: function cells In-reply-to: "PADGET@UTAH-20.ARPA's message of 23 Aug 83 11:35 MDT" To: Julian Padget cc: lisp-forum@MIT-MC.ARPA, Bobrow.PA@PARC-MAXC.ARPA, Teitelman.PA@PARC-MAXC.ARPA Lisp 1.5 stored function definitions on the property list. To my knowledge, the first Lisp that used an independent function cell was the PDP-1 implementation of BBN-Lisp, the precursor of Interlisp. Dan Bobrow may be able to confirm or add details. This would have been around 1965. He and I were the principal architects of this system; I would trust his word about who invented what. You may find other interesting historical information in the Information International Inc. Lisp system book, edited by Edmund C. Berkeley and also published sometime around 1965. It doesn't describe BBN-Lisp, but it describes some other systems, and one of them may have function cells -- I don't really remember. I don't know what alternatives you want a justification with respect to. The changeover from property list representation was primarily for efficiency.
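[An editorial sketch, not part of the original mail, of the separate cells under discussion: in a two-namespace Lisp such as Common Lisp, a symbol's function cell and value cell are filled and consulted independently. The symbol THING is hypothetical.]

```lisp
;; A symbol's function cell and value cell are independent.
(defun thing () 'from-function-cell)  ; fills the function cell of THING
(defvar thing 'from-value-cell)       ; fills the value cell of THING

(thing)   ; head position consults the function cell => FROM-FUNCTION-CELL
thing     ; argument position consults the value cell => FROM-VALUE-CELL

;; The two cells can also be read explicitly:
(symbol-function 'thing)   ; => the function object
(symbol-value 'thing)      ; => FROM-VALUE-CELL
```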
The distinction between "function context" and "variable context" for interpreting names was a bad idea inherited from Lisp 1.5: it was retained for a very long time because doing free variable lookups for every function name (which are normally only bound at the top level) would have been atrociously slow. It was only with the invention of shallow binding that people started to look seriously at re-merging function and variable binding.  Date: 23 Aug 1983 1135-MDT From: Julian Padget Subject: function cells To: lisp-forum@MIT-MC cc: Padget@UTAH-20 I am doing a little bit of historical digging on LISP, and I would like to know who claims responsibility for the invention of the function cell which is common to so many LISP implementations. Further I would be interested to hear a justification for both its invention and retention. --Julian Padget (Padget@Utah-20). -------  Redistributed-Date: 14 June 1983 22:27 mst Redistributed-By: VaughanW.REFLECS at HI-MULTICS Redistributed-To: info-ada at MIT-MC, editor-people at SU-SCORE, lisp-forum at MIT-MC Date: 27 May 1983 19:08 mst From: VaughanW at HI-MULTICS (Bill Vaughan) Subject: Call For Papers To: HUMAN-NETS at RUTGERS cc: sf-lovers at MIT-AI, info-micro at MIT-MC Last year at this time I put the Call for Papers for the PC3 conference out to these mailing lists and bulletin boards. We seemed to get a good response, so here it is again. Notice that this year's theme is a little different. Further note that we are formally refereeing papers this year. If anyone out there is interested in refereeing, please send me a note. --------------- Third annual Phoenix Conference on Computers and Communications CALL FOR PAPERS Theme: THE CHALLENGE OF CHANGE - Applying Evolving Technology. 
The conference seeks to attract quality papers with emphasis on the following areas: APPLICATIONS -- Office automation; Personal Computers; Distributed systems; Local/Wide Area Networks; Robotics, CAD/CAM; Knowledge-based systems; unusual applications. TECHNOLOGY -- New architectures; 5th generation & LISP machines; New microprocessor hardware; Software engineering; Cellular mobile radio; Integrated speech/data networks; Voice data systems; ICs and devices. QUALITY -- Reliability/Availability/Serviceability; Human engineering; Performance measurement; Design methodologies; Testing/validation/proof techniques. Authors of papers (3000-5000 words) or short papers (1000-1500 words) are to submit abstracts (300 words max.) with authors' names, addresses, and telephone numbers. Proposals for panels or special sessions are to contain sufficient detail to explain the presentation. 5 copies of the completed paper must be submitted, with authors' names and affiliations on a separate sheet of paper, in order to provide for blind refereeing. Abstracts and proposals due: August 1 Full papers due: September 15 Notification of Acceptance: November 15 Conference Dates: March 19-21, 1984 Address the abstract and all other replies to: Susan C. Brewer Honeywell LCPD, MS Z22 PO Box 8000 N Phoenix AZ 85066 ---------------- Or you can send stuff to me, Bill Vaughan (VaughanW @ HI-Multics) and I will make sure Susan gets it.  Date: Wednesday, 11 May 1983, 03:25-EDT From: Matt BenDaniel Subject: LOOP CONTINUE STATEMENT To: MOON@SCRC-TENEX, SMATT@MIT-OZ Cc: bug-LISPM@MIT-OZ, dove@MIT-DSPG, LISP-FORUM@MIT-OZ In-reply-to: The message of 11 May 83 02:43-EDT from MOON at SCRC-TENEX A CONTINUE statement should mean skipping executing any code lexically after it on the current iteration. This could, of course, be a problem in the following: . . . (loop for x = 0 then (1+ x) IF (> x 3) CONTINUE until (> x 7)) However, this is the problem of the coder.
If there are other reasons why implementing (or specifying) the function of a CONTINUE statement is a problem, how about constraining the location of an IF-CONTINUE sequence in a LOOP body in a manner similar to the IF-DO sequence, where iteration is not allowed to follow body code. Also, what about a CONTINUE-NAMED feature for NAMED loops?  Date: Wednesday, 11 May 1983 02:43-EDT From: MOON at SCRC-TENEX To: Matt BenDaniel Cc: bug-LISPM at MIT-OZ, dove at MIT-DSPG, LISP-FORUM at MIT-OZ Subject: LOOP CONTINUE STATEMENT In-reply-to: The message of 11 May 1983 01:58-EDT from Matt BenDaniel Date: Wednesday, 11 May 1983, 01:58-EDT From: Matt BenDaniel I'd also be very interested in hearing answers to the following question: Date: Thursday, 14 April 1983, 10:14-EST From: Webster Dove Is there a way in (loop ...) to say "go directly to the next iteration. Do not execute the remaining clauses of the body" Such statements typically are called "continue" or "next" I have encountered many situations where such a statement would be useful. Date: Thursday, 14 April 1983 17:27-EST From: MOON at SCRC-TENEX In-reply-to: The message of 14 Apr 1983 10:14-EST from Webster Dove There isn't now. Normally one encloses the body in a conditional (unfortunately, it can be painful to do this currently if the body includes COLLECT statements). The main problem with having a continue statement is that it may be unclear just what is regarded as "the body" and what is regarded as "the iteration framework": If there is a WHILE statement later in the LOOP than the CONTINUE, should it be skipped or should it still be executed? And is the answer to this affected by whether there is a DO after the WHILE?
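[Editorial sketch, not from the original mail: Moon's suggested workaround of enclosing the body in a conditional can be written directly with LOOP's conditional clauses.]

```lisp
;; Instead of a CONTINUE clause, guard the rest of the body with a
;; conditional: "continue" past odd X by collecting only when X is even.
(loop for x from 0 to 7
      unless (oddp x)
        collect x)   ; => (0 2 4 6)
```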
Date: Wednesday, 11 May 1983, 01:58-EDT From: Matt BenDaniel Subject: LOOP CONTINUE STATEMENT To: bug-LISPM@MIT-OZ, LISP-FORUM@MIT-OZ, dove@MIT-DSPG In-reply-to: The message of 14 Apr 83 10:14-EST from Webster Dove I'd also be very interested in hearing answers to the following question: Return-path: Date: Thursday, 14 April 1983, 10:14-EST From: Webster Dove To: info-LISPM at MIT-OZ Is there a way in (loop ...) to say "go directly to the next iteration. Do not execute the remaining clauses of the body" Such statements typically are called "continue" or "next" I have encountered many situations where such a statement would be useful.  Date: 10-Feb-83 14:11:16-PST (Thu) From: UCBKIM.jkf@Berkeley (John Foderaro) Subject: Re: UNLESS -- or something like it. Message-Id: <8301102211.3519@UCBKIM.BERKELEY.ARPA> Received: by UCBKIM.BERKELEY.ARPA (3.256 [12/5/82]) id AA03519; 10-Feb-83 14:11:16-PST (Thu) Received: from UCBKIM.BERKELEY.ARPA by UCBVAX.BERKELEY.ARPA (3.300 [1/17/83]) id AA06432; 10 Feb 83 14:10:07 PST (Thu) To: @Mit-mc:RpK@MIT-OZ, @mit-mc:Lisp-Forum@MIT-OZ In-Reply-To: Your message of Thursday, 10 February 1983, 16:07-EST Franz Lisp has an 'if' macro which does what you want. For example: (if val thenret else (do-it)) ==> (cond (val) (t (do-it))) There are other forms too, such as: (if a then b elseif c thenret else d) is (cond (a b) (c) (t d)) I've always thought that the 'if' macro without keywords is far worse visually than the cond it expands into. For compatibility, the Franz if macro does handle the non-keyword cases (with the obvious exceptions).  Date: Thursday, 10 February 1983, 16:07-EST From: Robert P. Krajewski Subject: UNLESS -- or something like it.
To: Lisp-Forum at MIT-OZ I find myself writing expressions of this form often: (IF VAL VAL (DO-IT)) ; VAL evaluated once, of course This is almost like UNLESS (the Common Lisp one-armed conditional), except UNLESS returns () if VAL is non-(), so it's useless as an EXPRESSION (UNLESS is still pretty good for side-effecting code, though). The above code fragment can also be written as (OR VAL (DO-IT)) ; VAL can only evaluate once anyway. Is there some macro FOO that does this ?

    (DEFMACRO FOO (VAL &BODY ELSE)   ; not the real definition, it
      `(OR ,VAL (PROGN ,@ELSE)))     ; should be smart about
                                     ; (FOO (INCF X) -1) or the like

It seems like a fairly useful construction. Anybody seen a form (macro) that does this ? ``Bob''  Date: 22 April 1982 14:31-EST From: George J. Carrette Subject: &mumbles at all levels. To: ALAN at MIT-MC cc: BUG-LISPM at MIT-MC, LISP-FORUM at MIT-MC, BUG-LISP at MIT-MC, DULCEY at MIT-ML Go for it! Indeed, for the destructuring implementation for NIL I implemented &mumbles at all levels, and it was easier, cleaner, and produced considerably less code per DEFMACRO than using the other methods. For example, the following defmacro: (defmacro g ((a b (c d)) &optional e) (foo a b c d e)) compiles into 42 pdp-10 instructions using the technology presently provided in Maclisp, but only 7 pdp-10 instructions using the technology used in NIL (inside the compilation environment in pdp-10 maclisp which *was* used to compile the cross compiler). -gjc  Date: 21 Apr 1982 21:57 PST From: JonL at PARC-MAXC Subject: Re: &mumbles at all levels -- BIND-ARGS In-reply-to: ALAN's message of 20 April 1982 21:27-EST To: Alan Bawden cc: Guy.Steele@CMUA,LISP-FORUM@MIT-MC Wasn't there a lot of discussion about 6 months to a year ago about having LET accept &mumbles "at lots of levels"? When you say "at all levels" do you mean that BIND-ARGS will also destructure?
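[A minimal editorial sketch of the macro Bob asks for above; the name OR-ELSE is hypothetical. As he notes, it is essentially just OR, plus an implicit PROGN around the else-forms.]

```lisp
(defmacro or-else (val &body else)
  "Return VAL if it is non-NIL, otherwise evaluate the ELSE forms."
  `(or ,val (progn ,@else)))

;; Usage: VAL is evaluated exactly once, as the discussion requires.
(or-else (getf '(:a 1) :b)
  'the-default)               ; => THE-DEFAULT
```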
If progress on LET is going to be forever bogged down in the red tape, then how about doing BIND-ARGUMENTS at least one of the two ways proposed for destructuring (and since you are using the &optional words, you have more-or-less selected the data-pattern format rather than the SETF format).  Date: Wednesday, 21 April 1982, 02:40-EST From: David L. Andre Subject: &mumbles at all levels. To: ALAN at MIT-MC Cc: BUG-LISPM at MIT-MC, LISP-FORUM at MIT-MC, DULCEY at MIT-ML, DLA at SCRC-TENEX Date: 20 April 1982 21:27-EST From: Alan Bawden 2) &list-of STOPS working... (finally I got around to the screw). Does ANYONE use this feature? I could try and duplicate it, but if no one uses it (as I suspect) I would rather just flush it. What do LispMachine people think? (If no one raises objection, I'll ask info-lispm next.) I use &LIST-OF in a couple places. But I don't really care if you remove it; it's a kludge.  Date: 20 April 1982 21:27-EST From: Alan Bawden Subject: &mumbles at all levels. To: BUG-LISPM at MIT-MC cc: LISP-FORUM at MIT-MC, DULCEY at MIT-ML Date: 04/19/82 22:30:38 From: DULCEY at MIT-ML

    (defmacro foo ((one &optional (two ''two)) &body three)
      `(list ,one ,two ',three))
    >>ERROR: &OPTIONAL -- unrecognized & keyword in DEFMACRO. While in the function ...

This probably isn't defined as working. However, it would be useful if it did. Indeed, this is currently defined to be an error. Would anyone object if I actually made it act as it obviously should? I would like to fix this, and simultaneously introduce a general tool for performing macro body parsing.
Proposed new special form: BIND-ARGUMENTS example:

    (bind-arguments ((a &optional (b *b*)) (foo) (barf))
      b o d y)

(approximately) ==>

    (let ((gensym (foo)))
      (if (not (and (<= (length gensym) 2)
                    (>= (length gensym) 1)))
          (barf))
      (let ((a (car gensym))
            (b (if (< (length gensym) 2) *b* (cadr gensym))))
        b o d y))

Now, you probably would never need a macro like this directly, but suppose you had to write defmacro yourself:

    (defmacro defmacro (name pattern &body body)
      (let ((v (gensym)))
        `(macro ,name (,v)
           (bind-arguments (,pattern (cdr ,v) (ferror nil "Bad syntax: ~S" ,v))
             ,@body))))

That was easy wasn't it! So easy that ANYONE can do it. This seems to be the right tool for bringing &mumble-argument-parsing to the masses. Now I already have a working one of these (amazingly useful in the right situations I must add), and I would like to install it in the LispMachine as the way defmacro etc work. This would have two noticeable effects: 1) &keywords would start to work at all levels in defmacro patterns. I presume no one objects to this? 2) &list-of STOPS working... (finally I got around to the screw). Does ANYONE use this feature? I could try and duplicate it, but if no one uses it (as I suspect) I would rather just flush it. What do LispMachine people think? (If no one raises objection, I'll ask info-lispm next.) Unnoticeable effect: 3) The code produced by defmacro would be smaller and faster. (You would be appalled at the code defmacro currently turns out.)  Date: 22 February 1982 15:35-EST From: 2Lt Eric J. Swenson Subject: Interlisp and MacLisp (almost) To: FININ at WHARTON-10 cc: LISP-FORUM at MIT-MC Thanks for all your help with respect to Interlisp to Maclisp translators. I am forwarding your message to ZHARTMAN@ISIE who works with the folks who are interested in this endeavor. I'll let them handle it from there. I would, however, like to stay in touch with the discussion, so please CC your responses to their questions to me. Thanks.
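[Editorial note: the "&mumbles at all levels" behavior discussed in this thread is how Common Lisp macro lambda lists eventually ended up, with destructuring and lambda-list keywords permitted at every level. A sketch, reusing gjc's G pattern; the backquoted body here is supplied for illustration.]

```lisp
;; Lambda-list keywords work at all levels of a DEFMACRO pattern.
(defmacro g ((a b (c d)) &optional (e 99))
  `(list ,a ,b ,c ,d ,e))

(macroexpand-1 '(g (1 2 (3 4)) 5))   ; => (LIST 1 2 3 4 5)
(g (1 2 (3 4)))                      ; => (1 2 3 4 99)
```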
Also, I am interested in the SRI project used to transport large systems. Please give me any pointers to relevant information. Thanks -- Eric  Date: Monday, 1 February 1982 11:38-EST From: HIC at SCRC-TENEX To: Jon L White Cc: common-lisp at SU-AI, LISP-FORUM at MIT-MC Subject: Incredible co-incidence about the format ((MACRO . f) ...) Date: Monday, 1 February 1982 10:47-EST From: Jon L White To: common-lisp at SU-AI cc: LISP-FORUM at MIT-MC Re: Incredible co-incidence about the format ((MACRO . f) ...) One of my previous messages seemed to imply that ((MACRO . f) ...) on the LISPM fulfills the intent of my second suggestion -- apparently there is a completely unforeseen consequence of the fact that (FSYMEVAL 'FOO) => (MACRO . ) when FOO is defined as a macro, such that the interpreter "makes it work". However, MACROEXPAND knows nothing about this format, which is probably why the compiler can't handle it; also such action isn't documented anywhere. Of course MACROEXPAND knows about it (but not the version you looked at). I discovered this BUG (yes, BUG, I admit it, the LISPM had a bug) in about 2 minutes of testing this feature, after I told the world I thought it would work, and fixed it in about another two minutes. Thus I believe it to be merely an accidental co-incidence that the interpreter does anything at all meaningful with this format. My "second suggestion" now is to institutionalize this "accident"; it certainly would make it easier to experiment with a pseudo-functional programming style, and it obviously hasn't been used for any other meaning. JONL, you seem very eager to make this be your proposal -- so be it. I don't care. However, it works on the Lisp Machine (it was a BUG when it didn't work) to have (MACRO . foo) in the CAR of a form, and thus it works to have a lambda macro expand into this.
Of course, Lambda Macros are the right way to experiment with the functional programming style -- I think it's wrong to rely on seeing the whole form (I almost KNOW it's wrong...). In any case, the Lisp Machine now has these.  Date: 1 February 1982 10:47-EST From: Jon L White Subject: Incredible co-incidence about the format ((MACRO . f) ...) To: common-lisp at SU-AI cc: LISP-FORUM at MIT-MC One of my previous messages seemed to imply that ((MACRO . f) ...) on the LISPM fulfills the intent of my second suggestion -- apparently there is a completely unforeseen consequence of the fact that (FSYMEVAL 'FOO) => (MACRO . ) when FOO is defined as a macro, such that the interpreter "makes it work". However, MACROEXPAND knows nothing about this format, which is probably why the compiler can't handle it; also such action isn't documented anywhere. Thus I believe it to be merely an accidental co-incidence that the interpreter does anything at all meaningful with this format. My "second suggestion" now is to institutionalize this "accident"; it certainly would make it easier to experiment with a pseudo-functional programming style, and it obviously hasn't been used for any other meaning.  Date: 30 January 1982 17:39-EST From: Jon L White Subject: The format ((MACRO . f) ...) To: common-lisp at SU-AI cc: LISP-FORUM at MIT-MC HIC has pointed out that the LISPM interpreter already treats the format ((MACRO . f) ...) according to my "second suggestion" for ((FMACRO . f) ..); although I couldn't find this noted in the current manual, it does work. I'd be just as happy with ((MACRO . f) ...) -- my only consideration was to avoid a perhaps already used format. Although the LISPM compiler currently barfs on this format, I believe there will be a change soon? The issue of parallel macro formats -- lambda-macros versus only context-free macros -- is quite independent; although I have a preference, I'd be happy with either one.
Date: 30 January 1982 16:55-EST From: Jon L White Subject: Comparison of "lambda-macros" and my "Two little suggestions ..." To: KMP at MIT-MC, hic at SCRC-TENEX cc: LISP-FORUM at MIT-MC, common-lisp at SU-AI [Apologies for double mailings -- could we agree on a name for a mailing list to be kept at SU-AI which would just be those individuals in COMMON-LISP@SU-AI which are not also on LISP-FORUM@MC] There were two suggestions in my note, and lambda-macros relate to only one of them, namely the first one FIRST SUGGESTION: In the context of (( . . .) a1 a2), have EVAL macroexpand the part ( . . .) and "try again" before recursively evaluating it. This will have the incompatible effect that (defmacro foo () 'LIST) ((foo) 1 2) no longer causes an error (unbound variable for LIST), but will rather first expand into (list 1 2), which then evaluates to (1 2). Note that for clarity, I've added the phrase "try again", meaning to look at the form and see if it is recognized explicitly as, say, some special form, or some subr application. The discussion from last year, which resulted in the name "lambda-macros" centered around finding a separate (but equal?) mechanism for code-expansion for non-atomic forms which appear in a function place; my first suggestion is to change EVAL (and compiler if necessary) to call the regular macroexpander on any form which looks like some kind of function composition, and thus implement a notion of "Meta-Composition" which is context free. It would be a logical consequence of this notion that eval'ing (FUNCTION (FROTZ 1)) must first macroexpand (FROTZ 1), so that #'(FPOSITION ...) could work in the contexts cited about MAP. However, it is my second suggestion that would not work in the context of an APPLY -- it is intended only for the EVAL-of-a-form context -- and I'm not sure if that has been fully appreciated since only RMS appears to have alluded to it.
However, I'd like to offer some commentary on why context-free "meta-composition" is good for eval, yet why context-free "evaluation" is bad: 1) Context-free "evaluation" is SCHEME. SCHEME is not bad, but it is not LISP either. For the present, I believe the LISP community wants to be able to write functions like:

    (DEFUN SEMI-SORT (LIST)
      (IF (GREATERP (FIRST LIST) (SECOND LIST))
          LIST
          (LIST (SECOND LIST) (FIRST LIST))))

Correct interpretation of the last line means doing (FSYMEVAL 'LIST) for the instance of LIST in the "function" position, but doing (more or less) (SYMEVAL 'LIST) for the others -- i.e., EVAL acts differently depending upon whether the context is "function" or "expression-value". 2) Context-free "Meta-composition" is just source-code re-writing, and there is no ambiguity of reference such as occurred with "LIST" in the above example. Take this example:

    (DEFMACRO GET-SI (STRING)
      (SETQ STRING (TO-STRING STRING))
      (INTERN STRING 'SI))

    (DEFUN SEE-IF-NEW-ATOM-LIST (LIST)
      ((GET-SI "LIST") LIST (GET-SI "LIST")))

Note that the context for (GET-SI "LIST") doesn't matter (sure, there are other ways to write equivalent code but . . .) Even the following macro definition for GET-SI results in perfectly good, unambiguous results:

    (DEFMACRO GET-SI (STRING)
      `(LAMBDA (X Y) (,(intern (to-string string) 'SI) X Y)))

For example, assuming that (LAMBDA ...) => #'(LAMBDA ...), (SEE-IF-NEW-ATOM-LIST 35) => (35 #'(LAMBDA (X Y) (LIST X Y))) The latter (bletcherous) example shows a case where a user ** perhaps ** did not intend to use (GET-SI...) anywhere but in function context -- he simply put in some buggy code. The lambda-macro mechanism would require a user to state unequivocally that a macro definition is for use in precisely one context; I'd rather not be encumbered with separate-but-parallel machinery and documentation -- why not have this sort of restriction on macro usage contexts be some kind of optional declaration?
Yet my second suggestion involves a form which could not at all be interpreted in "expression-value" context: SECOND SUGGESTION: Let FMACRO have special significance for macroexpansion in the context ((FMACRO . ) . . .), such that this form is a macro call which is expanded by calling on the whole form. Thus (LIST 3 (FMACRO . )) would cause an error. I believe this restriction is more akin to that which prevents MACROs from working with APPLY.  kwh@MIT-AI 01/30/82 10:20:21 Re: minilisp To: lisp-forum at MIT-MC, JoSH at RUTGERS CC: KWH at MIT-AI Bob Kirby at the University of Maryland has a nice LISP for the 11; it is a derivative of Maryland LISP (which runs on UNIVAC's) which is a derivative of Wisconsin LISP... It has a pretty printer, a MICROPLANNER, a structure editor, and a bunch of other stuff. Bob Kirby is in the computer science department at Maryland, so you might want to get in touch with him; the only problem with any LISPs for the 11 is that you are intrinsically limited by that address space (unless you hack virtual memory, which is hairy but possible....) Good luck, Ken. p.s. Does MIT have a copy of Rutgers' extended addressing LISP? Can we get one? Is there any documentation for it I could get a copy of?  Date: 30 January 1982 07:26-EST From: Kent M. Pitman Subject: Those two little suggestions for macroexpansion To: Fahlman at CMU-20C cc: LISP-FORUM at MIT-MC Date: 28 Jan 1982 1921-EST From: Fahlman at CMU-20C JONL's suggestion looks pretty good to me... ----- Actually, JONL was just repeating suggestions brought up by GLS and EAK just over a year ago on LISP-FORUM. I argued then that the recursive EVAL call was semantically all wrong and not possible to support compatibly between the interpreter and compiler ... I won't bore you with a repeat of that discussion. If you've forgotten it and are interested, it's most easily gettable from the file "MC: AR1: LSPMAIL; FMACRO >".
Date: 28 Jan 1982 1921-EST From: Fahlman at CMU-20C Subject: Re: Two little suggestions for macroexpansion To: JONL at MIT-MC cc: LISP-FORUM at MIT-MC In-Reply-To: Your message of 27-Jan-82 1724-EST JONL's suggestion looks pretty good to me. Given this sort of facility, it would be easier to experiment with functional styles of programming, and nothing very important is lost in the way of useful error checking, at least nothing that I can see. "Experiment" is a key word in the above comment. I would not oppose the introduction of such a macro facility into Common Lisp, but I would be very uncomfortable if a functional-programming style started to pervade the base language -- I think we need to play with such things for a couple of years before locking them in. -- Scott -------  Date: Thursday, 28 January 1982, 02:34-EST From: David A. Moon Subject: Two little suggestions for macroexpansion To: Jon L White Cc: LISP-FORUM at MIT-MC In-reply-to: The message of 27 Jan 82 17:24-EST from Jon L White This exists in the next Lisp machine system, which is about to be released.  Date: 27 January 1982 17:24-EST From: Jon L White Subject: Two little suggestions for macroexpansion To: LISP-FORUM at MIT-MC Several times in the COMMON LISP discussions, individuals have proffered a "functional" format to alleviate having lots of keywords for simple operations: E.g. GLS's suggestion on page 137 of "Decisions on the First Draft Common Lisp Manual", which would allow one to write ((fposition #'equal x) s 0 7) for (position x s 0 7) ((fposition #'eq x) s 0 7) for (posq x s 0 7) This format looks similar to something I've wanted for a long time when macroexpanding, namely, for a form foo = (( . . .) a1 a2) then, provided that isn't one of the special words for this context [like LAMBDA or (shudder!) LABEL] why not first expand ( . . .), yielding say , and then try again on the form ( a1 a2). Of course, ( . . .) may not indicate any macros, and will just be eq to it.
The MacLISP function MACROEXPAND does do this, but EVAL doesn't call it in this circumstance (rather EVAL does a recursive sub-evaluation). FIRST SUGGESTION: In the context of (( . . .) a1 a2), have EVAL macroexpand the part ( . . .) before recursively evaluating it. This will have the incompatible effect that (defmacro foo () 'LIST) ((foo) 1 2) no longer causes an error (unbound variable for LIST), but will rather first expand into (list 1 2), which then evaluates to (1 2). Similarly, the sequence (defun foo () 'LIST) ((foo) 1 2) would now, incompatibly, result in an error. [Yes, I'd like to see COMMON LISP flush the aforesaid recursive evaluation, but that's another kettle of worms we don't need to worry about now.] SECOND SUGGESTION: Let FMACRO have special significance for macroexpansion in the context ((FMACRO . ) . . .), such that this form is a macro call which is expanded by calling on the whole form. As a result of these two changes, many of the "functional programming style" examples could easily be implemented by macros. E.g.

    (defmacro FPOSITION (predfun arg)
      `(FMACRO . (LAMBDA (FORM)
                   `(SI:POS-HACKER ,',arg ,@(cdr form) ':PREDICATE ,',predfun))))

where SI:POS-HACKER is a version of POSITION which accepts keyword arguments to direct the actions, at the right end of the argument list. Notice how

    ((fposition #'equal x) a1 a2)
    ==> ((fmacro . (lambda (form)
                     `(SI:POS-HACKER X ,@(cdr form) ':PREDICATE #'EQUAL)))
         a1 a2)
    ==> (SI:POS-HACKER X A1 A2 ':PREDICATE #'EQUAL)

If any macroexpansion "cache'ing" is going on, then the original form ((fposition #'equal x) a1 a2) will be paired with the final result (SI:POS-HACKER X A1 A2 ':PREDICATE #'EQUAL) -- e.g., either by DISPLACEing, or by hashtable'ing such as MACROMEMO in PDP10 MacLISP.
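[An editorial aside, not the mechanism JonL proposes: in a lexically scoped Lisp the same call sites can be served by an ordinary function that returns a closure, at the cost of a FUNCALL at each use and the loss of the compile-time rewriting the FMACRO scheme is after. The name FPOSITION follows the message; the argument conventions of the closure are illustrative.]

```lisp
;; Closure-returning counterpart of the FPOSITION macro: no source
;; rewriting happens, so the result must be invoked with FUNCALL
;; (or passed to MAPCAR etc., where the macro version cannot go).
(defun fposition (predicate x)
  (lambda (sequence &optional (start 0) end)
    (position x sequence :test predicate :start start :end end)))

(funcall (fposition #'equal 'x) '(a x b x))   ; => 1
(mapcar (fposition #'eql 0) '((1 0) (0 1)))   ; => (1 0)
```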
Now unfortunately, this suggestion doesn't completely subsume the functional programming style, for it doesn't directly help with the case mentioned by GLS: ((fposition (fnot #'numberp)) s) for (pos-if-not #'numberp s) Nor does it provide an easy way to use MAPCAR etc, since (MAPCAR (fposition #'equal x) ...) doesn't have (fposition #'equal x) in the proper context. [Foo, why not use DOLIST or LOOP anyway?] Nevertheless, I've had many occasions where I wanted such a facility, especially when worrying about speed of compiled code. Any comments?  Date: 26 Jan 1982 (Tuesday) 0211-EDT From: FININ at WHARTON-10 (Tim Finin) Subject: Interlisp and MacLisp (almost) To: ejs at MIT-MC cc: lisp-forum at MIT-MC I've been working on a general inter-dialect Lisp translation system which specializes in Interlisp to Franz Lisp (which is very close to MacLisp). It is still undergoing development. It includes a general rule driven translator, a small set of pattern-action rules for translating Interlisp into Franz, and an Interlispy run time environment for Franz. At Penn, we've opted for a system that includes both translation and emulation. Some other efforts I'm aware of include the following:
- The Franz group at Berkeley have some sort of Interlisp compatibility package for Franz.
- An extensive Interlisp to MacLisp translation system called MACLISPIFY was written at SRI and used to transport some large systems. I can dig up the details if you're interested.
- The Interlisp system includes the TRANSOR package for translating Interlisp code to other Lisp dialects. There is a set of rules for Interlisp to Maclisp, although it is somewhat dated.
- There was an Interlisp to Maclisp translation system written by Jack Holloway (I believe) and extended by Dave McDonnald. It was used to translate the LUNAR system to MacLisp and LispMachine Lisp (I think).
- There is a group at Stanford on SUMEX that is trying to implement some of the Interlisp packages (e.g. the RECORD package) in Franz.
I can supply more details on some of these efforts if you are interested. Tim  Date: 25 January 1982 20:38-EST From: 2Lt Eric J. Swenson Subject: Interlisp and MacLisp To: LISP-FORUM at MIT-MC Is there any such thing as a MacLisp package that simulates an Interlisp environment? We have many program verification and proving programs written in Interlisp and would like to run them on Multics. Any guidelines as to what action to take? Is a total rewrite necessary? Is there anyone with whom I can put some folks interested in this endeavor in contact? Thanks. -- Eric  Date: 16 Jan 1982 0115-EST From: JoSH Subject: minilisp To: lisp-forum at MIT-MC is there a decent lisp for a pdp-11? available on the net? for what operating system(s)? thanx --JoSH -------  Date: 12 January 1982 17:38-EST From: Jon L White Subject: 'backquote' actions To: Morrison at UTAH-20 cc: LISP-FORUM at MIT-MC In MacLISP, the variable BACKQUOTE-EXPAND-WHEN determines whether the reader-macro produces a minimal, standard, lisp form which merely 'evaluates to the right thing', or a form with extra evaluator macros inserted which correspond to the places where 'commas' of various kinds appeared. Of course, the latter format 'evaluates to the right thing' too. The LISPM version produces a rather minimal form for evaluation, but using internal function names rather than the standard lisp ones. The point of having either the extra evaluator macros, or the special internal function names, is so that a random piece of code which was constructed up by the backquote macro can be re-parsed into a faithful representation of the original input.
The advantage of using evaluator macros as opposed to internal subrs is that automatic code analyzers don't have to know about these internal names (but rather would only have to know about macroexpansion in general and the usual primitive lisp subrs); the disadvantage of using the evaluator macro format is that it takes an extra cons cell or so for each comma in the source input. Note for example that there currently is no way to distinguish between the internal forms of the following two functions: (defun FIVE+ONE (x) '6) (defun SIX (x) (quote 6)) whereas (defun QUOTIFY-1 (x) `',x) (defun QUOTIFY-2 (x) (list (quote QUOTE) x)) are distinguishable when BACKQUOTE-EXPAND-WHEN is set to EVAL (or when other internal markers are left in).  Date: 11 Jan 1982 1601-MST From: Don Morrison Subject: clarification regarding backquote query To: lisp-forum at MIT-AI The sort of thing I had in mind for a complicated macro producing macro was to have a macro producing macro foo consing up the final form, but bits and pieces it stuffs in are being created by a function bar. The value which foo finally returns will be some hairy conglomeration of conses, lists, and appends, such as backquote is particularly good at creating. But the pieces which bar creates will mostly be constant, but sometimes I'd like bar to be able to return a piece which contains a call on unquote, which unquote will be seen by the dynamically surrounding backquote (i.e. in foo), rather than a lexically surrounding one. Perhaps this is too complicated to be done clearly (this description certainly is so complicated that I doubt it would ever be clear) with backquote, but I seem to remember having created such a beast where this seemed more perspicuous than the hard way. Unfortunately I can't remember the exact example, and all the examples I can now dream up don't warrant such a procedure; seems that in all my toy examples simply adding an unquote in the caller and a backquote in the callee works. 
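Morrison's "unquote in the caller and a backquote in the callee" pattern can be sketched as follows. This is an editor's illustration in Common Lisp, not code from any of the messages; FOO and BAR are the hypothetical names used above.

```lisp
;; BAR (the helper) builds its piece with an ordinary backquote;
;; FOO (the macro) splices that piece in with an unquote.  No
;; "dynamically scoped" unquote is needed.
(defun bar (name)
  `(print ',name))             ; callee: lexical backquote

(defmacro foo (name)
  `(progn ,(bar name) t))      ; caller: unquote around the call to BAR

;; (macroexpand-1 '(foo x)) => (PROGN (PRINT 'X) T)
```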
Perhaps the "dynamic version" is never really useful. And anyway, I gather from some of the replies that there are functional entry points to the required pieces available in MACLISP and friends, though I still don't know their names. -------  Date: Monday, 11 January 1982, 16:44-EST From: Robert W. Kerns Subject: backquote query To: Morrison at UTAH-20 Cc: Lisp-Forum at MIT-AI In-reply-to: The message of 11 Jan 82 11:06-EST from Don Morrison [Sorry for the incomplete message] I'm not sure what your problem with BACKQUOTE is, so I'll try to touch all the bases. Some of this may be obvious. Are you clear on what they mean? The meaning of `(FOO ,BAR) is to perform the same as (LIST 'FOO BAR). Exactly what code is generated is not material to the purpose. You can look at what it does turn into by typing '`(foo ,bar), which would show you that it doesn't happen all at read-time. You don't want to cons up a backquote expression in a macro-defining macro. You want to cons up a piece of code which produces a list of a specific form. For this, you use nested backquotes. For example: (DEFMACRO DEF-NIL-CALLER (NAME SIZE) `(DEFMACRO ,NAME (FUN) `(,FUN ,@(MAKE-LIST ,SIZE)))) (DEF-NIL-CALLER CALL-5-NILS 5) (CALL-5-NILS LIST) ==> (NIL NIL NIL NIL NIL) If you need more info about how backquotes nest, see the discussion in the LISP Machine manual (preferably a recent one). Another way you may be getting confused is by looking at what they read to in the compiler in MACLISP, where it does indeed expand them at READ time for efficiency. There is nothing wrong with this, because there is absolutely no need to cons up commas and backquotes in your code; just use nested backquotes. Or you may be looking at the MACROEXPANDed result in the interpreter, perhaps with the macro having been DISPLACE'ed. Again, type '`(FOO ,BAR) to see what it really read as. I hope some of this is related to your problem.  Date: Monday, 11 January 1982, 16:29-EST From: Robert W.
Kerns Subject: backquote query To: Morrison at UTAH-20 Cc: Lisp-Forum at MIT-AI In-reply-to: The message of 11 Jan 82 11:06-EST from Don Morrison I'm not sure what your problem with BACKQUOTE is, so I'll try to touch all the bases. Some of this may be obvious. Are you clear on what they mean? The meaning of `(FOO ,BAR) is to perform the same as (LIST 'FOO BAR). Exactly what code is generated is not material. You don't want to cons up a backquote expression in a macro-defining macro. You want to cons up a piece of code which produces a list of a specific form. For this, you use nested backquotes. For example: (DEFMACRO DEF-NIL-CALLER (NAME SIZE) `(DEFMACRO ,NAME (FUN) `(,FUN ,@(MAKE-LIST ,SIZE))))  Date: 11 January 1982 1307-EST (Monday) From: Guy.Steele at CMU-10A To: lisp-forum at MIT-AI Subject: Backquote depth CC: morrison at utah-20 Message-Id: <11Jan82 130737 GS70@CMU-10A> I have seen it nested three deep a couple of times. However, I have only seen "',',"; I haven't ever seen "',',',". --Guy  Date: 11 January 1982 12:21-EST From: George J. Carrette Subject: backquote trivia, for you history fans. To: Morrison at UTAH-20 cc: Lisp-Forum at MIT-AI The PDP-10 Maclisp implementation of Backquote works at macroexpansion time as you wanted. This was done for ease of GRINDEF'ing. The Lispmachine implementation provides special synonyms for LIST, CONS, LIST*, APPEND, etc., and uses a simple pattern matcher to create a canonical pretty form for GRINDEF. The NIL implementation uses code lifted from code from which the Multics Maclisp backquote was derived, which is also related in some way to the Lispmachine backquote. In all implementations there are "car-position" markers for "," ",@" and ",." and some simple entry to the "BACKQUOTIFY" function called by "`" either at read or eval times. Q: What is the deepest nesting of backquote found to arise in practice?
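As an editor's illustration of the ",'," idiom GLS mentions (the macro name ABBREV and its use are invented here, but the two-level nesting pattern is the standard one):

```lisp
;; A macro-writing macro: (ABBREV SHORT LONG) defines SHORT as a
;; macro that expands into a call to LONG.  The ,', in the inner
;; backquote is where the two nesting levels meet.
(defmacro abbrev (short long)
  `(defmacro ,short (&rest args)
     `(,',long ,@args)))

(abbrev dc defconstant)   ; (dc x 1) now expands into (defconstant x 1)
```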
-gjc  Date: 11 Jan 1982 0906-MST From: Don Morrison Subject: backquote query To: Lisp-Forum at MIT-AI I have a question which to those in the know will probably seem rather dense, but for which I cannot see the answer. Why, in MACLISP and friends, is the backquote/unquote mechanism implemented strictly in terms of read macros? That is, why is the search for commas and the like done at read time, instead of the reader simply wrapping a macro around the form, and having the search for commas and such done at macro expansion time? Is this simply an efficiency hack for the interpreter? Several times in writing macro-producing macros I have wanted to dynamically create forms containing backquotes and unquotes, and (I think -- maybe I'm missing something) have been unable to do so since they are always expanded at read time. -------  Date: 15 December 1981 20:18-EST From: George J. Carrette Subject: More Lisp History Trivia Quiz To: Masinter at PARC-MAXC cc: Lisp-Forum at MIT-AI, Satterthwaite at PARC-MAXC I noticed that the BLISS-11 compiler did the tail recursion optimization when the routine took its arguments in registers.  Date: 15 Dec 1981 1103-PST From: Tony Hearn Subject: Re: Tail Recursion To: RPG at SU-AI, lisp-forum at MIT-AI In-Reply-To: Your message of 15-Dec-81 1043-PST There was a function in the old LISP 1.5 compiler called PROGITER which did tail recursion removal in most cases. I guess that this dates back to the early sixties. However, Risch's paper is the most complete account of general LISP recursion removal that I have seen. -------  Date: 15 Dec 1981 1043-PST From: Dick Gabriel Subject: Tail Recursion To: lisp-forum at MIT-AI Tore Risch from Uppsala has a technical report called something like ``REMREC, a Program for Recursion Removal''. It is report number DLU 73/24 and was published in 1973. Didn't the language semanticists in the 50's and 60's know all about this?
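The transformation that PROGITER and REMREC performed can be sketched on a made-up example (an editor's illustration in Common Lisp; the names LEN and LEN-ITER are invented): a self-call in tail position becomes a parallel rebinding of the arguments followed by a jump.

```lisp
;; Tail-recursive original: the recursive call is the last thing done.
(defun len (l n)
  (if (null l)
      n
      (len (cdr l) (+ n 1))))

;; After tail-recursion removal: rebind the "arguments" in parallel
;; with PSETQ and GO back to the top -- no stack growth.
(defun len-iter (l n)
  (prog ()
   loop (when (null l) (return n))
        (psetq l (cdr l)
               n (+ n 1))
        (go loop)))

;; (len-iter '(a b c) 0) => 3
```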
-rpg-  Date: 15 Dec 1981 09:28 PST From: Masinter at PARC-MAXC Subject: More Lisp History Trivia Quiz To: Lisp-Forum@MIT-AI cc: Satterthwaite Now that LISP-FORUM has beaten macros and their history into the ground, how about tail recursion removal? - - - - - - - - Date: 15 Dec. 1981 8:55 am PST (Tuesday) From: Satterthwaite.PA Subject: Eliminating Tail Recursion Can any of you provide some history or a good reference on this [eliminating tail recursion]? I heard the idea described as part of the hallway folklore, long enough ago that I forget the source. The only published references to it that I can find are some of the Scheme papers from 1975-76, but Steele hints that the technique was used in Lisp much earlier. This technique seems to be ignored in the catalogs of optimization techniques for algebraic languages that I have found; are there Pascal, C, etc. compilers that do it?  Date: 9 December 1981 1249-EST (Wednesday) From: Scott.Fahlman at CMU-10A To: lisp-forum at mit-ai Subject: C*R improvement Message-Id: <09Dec81 124912 SF50@CMU-10A> While CADADDAADDDAAADDDR is sexy, it really doesn't go far enough. Much better would be to allow numbers in among the A's and D's, which would serve as multipliers for the following letters. For instance C3A2DAR would be equivalent to CAAADDAR. Of course, we'd have to be careful about radix issues -- better expand this at readtime and look at IBASE then. Better still we could have some sort of escape convention (for now I will call it #?) that lets you insert a variable whose current value (an integer or a string of C's and D's) is inserted at that point. So (setq foo 4) (cad#?fooadr x) is equivalent to (cad4adr x) ==> (cadaaaadr x). This clearly wants to expand at runtime so that the value of foo can be changed to get different effects. One could even imagine allowing the user to insert arbitrary Lisp expressions into C*R form, but in my opinion this would be confusing and not very useful.
Cheers, Scott  Date: 9 Dec 1981 12:57:25-PST From: CSVAX.fateman at Berkeley To: Guy.Steele@CMU-10A, lisp-forum@MIT-MC Subject: Re: Multiple-valued SQRT Since, in general, numeric functions which return real values do not give a completely correct answer because of numerical approximations, one could argue for interval answers. Complex answers could be in circular regions. For multiple-valued functions, sets of them... In the case of square-root, returning 2 values is not really so useful, since if we agree to return one value (say in the right half-plane), the other one is easy to compute. I suspect that at the point of use of any elementary function routine, there is however a single meaningful response. The provision of multiple-values, sets, intervals, etc.... would be overkill. Prof. Kahan here would argue however, that in the case of some functions, a "reserved object" (not-a-number) makes sense. A rather elegant proposal for computing with such things is described in the IEEE floating point arithmetic standard. As for ways of representing infinite sets, I think SETL might have something on this, and Macsyma allows a kludge like (assume(n,integer), 2*n*%i*%pi). Computing with these things is naturally a royal pain unless they are anticipated.  Date: 9 December 1981 1225-EST (Wednesday) From: Guy.Steele at CMU-10A To: lisp-forum at MIT-MC Subject: Multiple-valued SQRT Message-Id: <09Dec81 122542 GS70@CMU-10A> RMS's idea that (MULTIPLE-VALUE (X Y) (SQRT -4)) sets X and Y to 2i and -2i is interesting. There are problems with extending the idea, however. If we also agree that (SQRT X) = (EXPT X 0.5), then we have that (EXPT X 0.25) should return four values, (EXPT X 0.0625) returns sixteen, and so on. Worse yet, complex LOG has an infinite number of values. --Guy  Date: 9 December 1981 00:58-EST From: Richard M. Stallman To: LISP-FORUM at MIT-AI Perhaps (MULTIPLE-VALUE (X Y) (SQRT -4)) should set X and Y to 2i and -2i, in whichever order you prefer.
It is, after all, a multiple-valued function.  Date: 8 December 1981 18:47-EST From: Jeffrey P. Golden To: LISP-FORUM at MIT-MC I do not agree with VaughanW.REFLECS that sqrt(-4) "ought" to return (2*i, -2*i) or ERROR. I think most LISP users would expect 2*i or ERROR. (Who's to say "sqrt" doesn't mean principal branch of sqrt, anyway? LISP documentation should make this clear.) After all, how many LISPs now return (2.0, -2.0) for (sqrt 4.0) ?  Date: 8 December 1981 09:53 cst From: VaughanW.REFLECS at HI-Multics Subject: Re: CR To: Kim.fateman at UCB-C70 cc: lisp-forum at MIT-MC In-Reply-To: Msg of 12/07/81 10:36 from Kim.fateman A program that "depends on the badness of code" is an anomaly - and its very dependence on a bug is itself a lurking bug. viz. sqrt(-4) ought to be (2*i, -2*i) and if your package can't return that answer, then ERROR. Of course, you can always claim "but my SQRT isn't supposed to be the mathematical square root" but that's no excuse. The package developer named it SQRT because (s)he either (a) thought it really was the mathematical square root, (b) knew it wasn't, but thought the differences were trivial, (c) knew the differences weren't trivial but thought the user wouldn't realize it, (d) was ordered to call it SQRT. In all cases, SQRT is supposed to be thought of as the true square root. QED. So it ought to behave the same way.  Date: 7 Dec 1981 08:36:30-PST From: Kim.fateman at Berkeley To: lisp-forum@mit-mc Subject: CR It seems to me in my reading of the papers for the Stanford Lisp conference that some Lisp implementation which was described (perhaps in a rejected paper) had a CR function for consistency. CR = (lambda(x) x) I suppose. As for Format: I believe macsyma's MFORMAT is available in Franz; the introduction (perhaps implementation?) of FORMAT for the pdp-10 in Macsyma's code post-dated the implementation of Franz on the VAX. 
It may get implemented (especially in a Common-Lisp version), but it is not, as of this moment, being funded at UCB. GLS is of course right that if there is a bug in the UNIX routines, the bug should be fixed. There are two barriers, in addition to the need to find the time to do it: 1. Some of the UNIX code is "machine independent". Hence some of the code (like sqrt) is truly bad unless you use the special-purpose code that is available (e.g. libnm on VAX). 2. Other programs may actually depend on the badness of such code. The famous feature-vs-bug.. Should sqrt(-4) be 2, -2, 0, or error? Whatever it may have been in the past, someone may need it.  Date: 6 December 1981 1908-EST (Sunday) From: Guy.Steele at CMU-10A To: lisp-forum at MIT-MC Subject: C*R and language design Message-Id: <06Dec81 190836 GS70@CMU-10A> JKF doesn't want to impose prejudices on me by not providing C*R in Franz LISP. It seems to me that many other prejudices are inflicted in many other forms: the absence of FORMAT and DEFSTRUCT, the incompatibility of the reader, and others. These all cramp my style in one way or another, the first-mentioned being the most annoying. It seems to me that an implementor will always necessarily impose constraints on a language, willy-nilly, if only because he doesn't have time or other resources to implement them all at once. (Thus it is not entirely reasonable to blame Franz LISP for the 1.0E99 problem. One could argue that if the C libraries are badly designed or contain errors, then they should not have been used, but instead reimplemented or (better yet) fixed. This is not always economically feasible, however, and presumably that was the case here.) The language will only have, at any given point in time, that which the implementor has had the resources (and taste) to install so far. On the other hand, as JKF points out, once a feature is installed, it can shaft a user badly to retract it, much more badly than never having installed it in the first place.
It follows that implementors should choose their features carefully and wisely, not only with respect to the order in which things are put in, but also with respect to whether they should be put in at all. Of course, hindsight is easier than foresight. Franz may be stuck with the C*R thing, because of the retraction problem; but the painful experience of many others seems to support the conclusion that it should not be introduced. This experience occurs not immediately, but after many years; it is a Trojan horse or time bomb. As KMP indicates, there is no problem using it to write code; and indeed, it is reasonable for the codewriter to expect consistency in the language. However, the indiscriminate use of C*R eventually leads to chaotic and unmaintainable code, with a severity on a par with the unrestrained and indiscriminate use of GO. GO, like C*R, can be used wisely; but the wise occasions are rarer than those where DO and friends (in the case of GO) or DEFSTRUCT and friends (in the case of C*R) are far more appropriate. I think the very fact that JKF finds it necessary to describe CADDDAR as "the fourth element of the first thing" -- a composite description -- rather than as some simple atomic notion indicates it is already too large to appear in "polite" code. If that is how he thinks of it, then that is how he should write it: (CADDDR (CAR X)). After all, depending on the context there might be another, more meaningful parsing. If I wanted the second element of the body of a lambda expression from a form X = ((LAMBDA ...) ...), then (CADR (CDDR (CAR X))) would be more suitable. Again I argue, the four-A/D instances approach the upper limit of what can be recognized as a single chunk by the eye. The more general question is whether consistency should be paramount in language design, or occasionally subordinated to such other goals as reliability, maintainability, detection of probable typographical errors, and so on. 
Here is an example from the recent Common LISP deliberations. The MacLISP mapping functions take a function and N lists, for N > 0, where the function must accept N arguments. In my zeal for that kind of consistency called "extending to boundary cases", I suggested that N be allowed to be zero. This is perfectly well-defined. A mapping function terminates when any of the lists runs out. With no lists, it can never terminate. Therefore (MAPC #'(LAMBDA () ...)) would be an idiom for (DO-FOREVER ...). This is all perfectly elegant and consistent, and it was roundly voted down. Why? Because (a) the case of MAP of zero lists is much more likely to occur by typographical accident than by intention; (b) it is not really consistent because with zero lists MAP does something which is conceptually very *different* from a pragmatic standpoint, despite the mathematical elegance; (c) because of (b), it is undesirable for reasons of maintainability even to suggest to the programmer that MAP could be abused in this way, and therefore in consideration of (a) the entire feature is best omitted from the language. A mild analogy arises here with the function CR. Why was the C...R hack not extended to zero as well as to infinity? Perhaps it was merely oversight, but I venture that CADADADADADDAR is somehow "like" or "in the same spirit as" CAR, whereas CR somehow is less so, since it doesn't select (although it is indeed the limiting case of selection, selecting the whole). I do not consider consistency to be paramount. Others no doubt will disagree. The mathematician in me sighed at the rejection of MAP of zero lists. The programmer in me was relieved that such an odd beast would likely not appear in future Common LISP code. 
--Guy  Date: 6-Dec-81 14:46:32-PST (Sun) From: KIM.jkf@Berkeley (John Foderaro) Subject: Re: Query Via: KIM.BerkNet (V3.69 [12/5/81]); 6-Dec-81 14:46:32-PST (Sun) To: LISP-FORUM@MIT-MC In-Reply-To: Your message of 6 December 1981 15:57-EST Alan, We only have setf in the maclisp compatibility package. If it were to become part of the standard system, it would have to handle all c*r cases, which would not be too hard. GJC, I agree completely that there are things to be fixed up in Franz. Some are out of our control since we use the unix C libraries and some are things we haven't gotten around to fixing. That code that gets printed out after a reader error was inserted for debugging purposes a while back and should have been replaced by explanatory text before the lisp was distributed. Needless to say, I disagree that the only c*r functions that should be defined are car and cdr, and that if the user types (car (car x)) then the compiler changes this to a call to an internal caar. Surely you have to admit that (caddr x) is more readable than (car (cdr (cdr x))) and certainly easier to type (and remember that the system exists purely for the benefit of the USER; if you insist that only car and cdr are defined then you will have a nice clean system but no one will use it).  Date: 6 December 1981 16:45-EST From: George J. Carrette Subject: c*r To: KIM.jkf at UCB-C70 cc: LISP-FORUM at MIT-MC You are making a rather strong statement when you say you would probably boot the lisp machine down the stairs because it didn't implement C*R. It depends on what you think is important to spend time on fully and consistently supporting. General "language quality" issues are much more important here than issues of "what is clear to everyone who has ever heard of the language lisp." For example, some people might not like the fact that in Franz 1.0E+99 reads in as 1.7014E+38 (i.e.
no overflow checking in flonum internalization), or that too many ")" gives the error message "Readlist error, code 3" or that "'." gives the error message "Readlist error, code 4" That may seem ok to somebody used to the error messages provided by the Unix "C" compiler, but your average hacker brought up in a Maclisp/ITS/Emacs=>Lispm environment just isn't going to buy it. There is a totally different perspective here, as evident from some of your examples. If a user on a lispm (or in Maclisp or Nil, for that matter) wanted to get at a deep subpart of a list structure during interactive use, they would not think of typing (cadddar x). Most likely they would call upon an interactive structure walker, usually called an "INSPECTOR," to get what they want in the minimum number of key-strokes. Couple this with the fact that minimum number of keystrokes is not at all an issue inside program source-files, and you can see why this C*R thing is not a bias of implementor against user. The most reasonable thing a lisp can do is probably to define only CAR and CDR, and to ensure that the compiler can optimize CAR/CDR compositions into calls to the internal entry points to the usually defined set of C*R functions. -gjc  Date: 6 December 1981 15:57-EST From: Alan Bawden Subject: Query To: KIM.jkf at UCB-C70 cc: LISP-FORUM at MIT-MC Dear Franz, Does it work to do (setf (cadddar x) y) ? (Or do you not have setf?) Does (or would) this work by having setf examine the printname of cadddar? I suppose you could automatically define cadddar as a macro, but then would (mapcar 'cadddar l) work? Sign me: Curious in Cambridge P.S. Does mapatoms ... Oh never mind.  Date: 6-Dec-81 11:56:04-PST (Sun) From: KIM.jkf@Berkeley (John Foderaro) Subject: c*r Via: KIM.BerkNet (V3.69 [12/5/81]); 6-Dec-81 11:56:04-PST (Sun) To: lisp-forum@mit-mc Again let's put this into perspective.
Franz has implemented c{a,d}+r closure for many years and I have never heard a complaint or even a comment about it from anyone until gjc just mentioned it. Apparently no one here thinks that it is unusual to define all c*r; they are lisp programmers, not implementors, and all they want is a uniform language. If they then moved their code to one of the lisps which only partially implement c*r, they would undoubtedly call it a 'hacky' system. Thus when you refer to the c{a,d}r HACK, it is not clear whether you are referring to systems which close over car and cdr or those that do not. As for the implementation, no one here has ever questioned that and I am not even going to say how it is done, since that doesn't have any bearing on this discussion. To the typical lisp programmer, it looks like all c*r functions are defined in the initial lisp. [The clever programmer could watch the storage use counts and figure out what functions were manufactured on the fly]. In response to ACW, we don't view the manufacture of the c*r functions as a 'feature' of the system which the user should then have control of. It is simply an action we must take to provide a uniform language. If we could obtain infinite time and space we would define all c*r functions in the initial lisp and throw out the manufacturing. As for the multiple oblist questions, franz only has one oblist, but if it were possible to make new ones then whether they had c*r defined would depend on whether the user asked for a copy of an oblist with c*r defined or whether he asked for an empty oblist. In the former case c*r would be defined; in the latter case it would not. Now that we have this defined, why should we as lisp implementors remove it? ALAN, you removed it from the lisp machine. Was it because: 1) users came to you saying "Please, please remove us from the temptation of cdddddr and deliver us unto the path of data abstraction".
2) it was a poor implementation to begin with and/or it conflicted with a later hack you wanted to make. 3) you felt that people who wrote cdddddr should be punished and, lacking 50Kv seats, you figured that a few 'undefined function' error messages should teach them a thing or two. What if you decide tomorrow that 'plus' should have no more than 4 arguments to prevent confusion? Or that the evil 'go' statement will no longer exist? My point is that the implementor should not inflict his prejudices on his community. Regarding what is clear and what is not: it is true that 'caddadr' is pretty confusing, but not because of its size; for example, 'cadddar' accesses the fourth element of the first part of a list. Thus it is the form of the c*r, not the size, which makes it understandable or not. My pet peeve is programs which have the capability to do something useful for the user but fail to. If I were sitting at a lisp machine with a hairy list structure in front of me and I wanted to see the fourth element of the first subpart of a list, and I typed '(cadddar x)' and it said that cadddar was an undefined function, I would be pretty disgusted. I think that what I wanted to do is clear to everyone who has ever heard of the language lisp, but to have the machine fail to do what it should just because it is distasteful to the implementors is pretty sad and I would probably boot the machine down the stairs.  Date: 6 December 1981 00:42-EST From: Kent M. Pitman To: LISP-FORUM at MIT-AI C*R chains should only be used for accessing homogeneous structures. If you want the CADR of the FOO part of some list, you should write (CADR (FOO l)), not (CADDAR l). If you have some structure so complex that you think you need a C*R chain like CADADDADAR to talk about one of its pieces, it's time you started naming some of those parts. Code involving long chains of C*R cruft is low-level, non-abstract, bit-diddly stuff. I have written it.
I have been caused enormous pain trying to figure out what code to change when I reformat that CDADR part of the data structure and all calls to CDADADDR have to be changed to CDADADAR instead. "Oh, if I'd just spent the few seconds writing an abstract macro to do the accessing," I say. But I didn't. Well, adding more C*R functions to the language might just make it easier for someone to screw himself that way. I just don't see the point. I think we should encourage the use of automatic structure defining facilities like DEFSTRUCT and forget about how they're implemented. (CAR (CDR ...)) compiles just as well as (CADR ...), so I don't know why all the flaming about adding some extra symbols to the language if I shouldn't even be using them in code I'd like to feel comfortable talking about in polite company. ps I, also, have written c*r auto-defining functions at some point. several different kinds. now i know better than to waste my time on it.  Date: 5 December 1981 2253-EST (Saturday) From: Guy.Steele at CMU-10A To: lisp-forum at MIT-MC Subject: A day without sunshine Message-Id: <05Dec81 225335 GS70@CMU-10A> Of course it is a feature that one can create an obarray without cons; this is what Bawden was arguing for. The question is, can you create one without CAAADADADADR, if the system goes around sneakily creating function definitions behind your back? Apparently Franz LISP does *not* suffer from Bawden's suspected bug, however, by virtue of the fact (as best I can determine from the manual and experimentation) that you simply can't create a second obarray in the first place. --Guy  Date: 5 Dec 1981 11:43:32-PST From: CSVAX.fateman at Berkeley To: lisp-forum@mit-mc Subject: a lisp without cons is not like a day without sunshine I think it is a feature that one can create an obarray without cons. You can do it by (remob 'cons) also. Applications built on top of lisp have been known to remob (in effect) lots of things.
This is only marginally related to c*r, it seems. If you really want to change the semantics of cons and c*r, you will have to talk to compiler writers, too. I think arbitrary function-name hacking could be (nicely?) supported in Interlisp, via DWIM: on an undefined function, editor commands which do intra-name pattern matching could be called into play. I am not advocating this, necessarily.  Date: 5 December 1981 04:32-EST From: Allan C. Wechsler Subject: CDDADADADADADADAAADADADADAR To: LISP-FORUM at MIT-AI This looks like so much fun that I'm going to jump in head first with both feet in my mouth. I'm curious to know who takes care of defining a random CADADAR when it is used. I can think of two approaches. One is to have READ do the work, and one is to have EVAL do it. Regardless of who defines CADADADAR when it is used, there is still a basic philosophical problem. It's been a design philosophy of Lisp that all its features be available to the user. For example, there are special forms in Lisp like SETQ and DO. Lisp gives the user the ability to define new special forms. Similarly, there are read macros like single-quote and the pound-sign macros; these are also fairly easy for the user to define. Now consider the C...R hack. However it's implemented, it's hard to imagine a good user interface to allow users to create similar hacks of their own. Borges said something like "Wonder and confusion are operations proper to God and not to men." Just on general principles, it seems a good guess that Lisp features which in their general form /can't be used by users/ ... such features may be marvels, but they are probably also confusions. ---Wechsler PS. I'm not really asking for possible user interfaces. One could imagine a mechanism whereby you could teach READ a regular expression against which it would henceforth check all new symbols' print-names, with a piece of code hanging off the regular expression saying what to do if such a symbol is read.
I hope this is conveying some of the reasons for my nervousness about features like the C...R hack. ---ACW  Date: 5 December 1981 03:13-EST From: Alan Bawden Subject: More pointless hot air about cdaddaddaadddaadaaddddaaar To: Kim.jkf at UCB-C70 cc: LISP-FORUM at MIT-MC Date: 4 Dec 1981 22:38:04-PST From: Kim.jkf at Berkeley Guy has implied that we are like naive, starry-eyed novices in our implementation of the closure of the c{ad}r function. His only evidence is his own boyhood experiences. Does anyone have an argument of substance to make against c{a,d}+r (not the use of it, but the mere fact that the c{a,d}+r function exists for all positive numbers of a's and d's). Well, I implemented this very same idea for the LispMachine several years back. So you can add me to the list of people who have implemented it. You can also add me to the list of people who think it is a bad idea. I took it out. You wish me to argue against the existence of these functions without reference to whether they will be used. Fooey! Can you really pretend to defend the point of view that we SHOULD have them, even though anyone who uses one should NOT be using it? Perhaps when the reader encounters one it should arrange to send 50,000 volts through the programmer's chair to remind him that he is doing something bad, but of course his code should WORK anyway! We argue against this idea PRECISELY because we cannot imagine a reasonable USE of one. And we all know damned well that if it exists, someone will use it. I'll bet your implementation has the following bug: Create an empty obarray, bind the symbol obarray to it, and call read. If you type "cadadadadar" now you get a symbol with a definition, but if you type "cons" you don't. What makes "cadadadadar" so special? I can imagine lots of bugs like this, and I can only imagine fixes for some of them.  Date: 4 Dec 1981 22:38:04-PST From: Kim.jkf at Berkeley Full-Name: John K.
Foderaro To: Guy.Steele@CMU-10A, lisp-forum@MIT-MC Subject: Re: C*R In-reply-to: Your message of 5 December 1981 0047-EST (Saturday) It is possible to describe the location of an object in a lisp list structure with a list of a's and d's, such as (a a d d a d a a). This is a very compact way of representing such an access. In Franz lisp, you can create the accessor function by adding a c at the beginning and an r at the end and concat'ing it. This is very much in the spirit of lisp. The result is a single function which is easily recognizable. Guy has implied that we are like naive, starry-eyed novices in our implementation of the closure of the c{ad}r function. His only evidence is his own boyhood experiences. Does anyone have an argument of substance to make against c{a,d}+r (not the use of it, but the mere fact that the c{a,d}+r functions exist for all positive numbers of a's and d's)?  Date: 5 December 1981 0047-EST (Saturday) From: Guy.Steele at CMU-10A To: lisp-forum at MIT-MC Subject: C*R Message-Id: <05Dec81 004759 GS70@CMU-10A> (1) It really should be the C(AuD)*R hack, not C(AuD)+R. As RMS first pointed out, CR should be the identity function. (2) As for computer-generated functions, it would seem to be much more in the spirit of LISP to generate (CAR (CDR (CDR X))) rather than (CADDR X) by a program. If you must generate a function reference rather than a form, there is always (LAMBDA (X) (CAR (CDR (CDR X)))). Why do string hacking when lists suffice? My experience, again, is that the C*R hack gets re-invented frequently, usually by starry-eyed novices. This is not an ad hominem attack, but an observation based on my experience. While elegant, it is useless, or at least should not be used. I once was a starry-eyed novice myself, and was quite pleased at inventing this cleverness; but the effort was totally wasted. I don't really care whether Franz retains the feature or not, but if you're going to go whole hog, go all the way and implement CR too.
(Then explain to everyone why CR doesn't do what TERPRI does! But that's another story.) --Guy  Date: 4 Dec 1981 21:10:00-PST From: CSVAX.fateman at Berkeley To: gjc@mit-mc Subject: Re: c*r Cc: CSVAX.jkf@Berkeley, lisp-forum@mit-mc George, you still haven't said why you think supporting c*r is bad. A human who uses caadadadadadr habitually at top-level is not applying various lessons about data abstraction, structures, etc., but that doesn't mean he/she should be prevented from using that function. And defining caadadadadadr to mean something ELSE would be strictly bad news. In particular, when such constructions are "computer-created", or used to implement REPRESENTATIONS, c*r seems quite natural. Your line about experience doesn't hold up. While I believe in "ad hominem" arguments as much as the next guy (see below), what is your point? Do you think KMP's definition of mdo-unless is better than (defmacro mdo-unless (x) `(caddddddr ,x)), which is how it could be done in Franz? I think you are just jealous that you didn't think of the c*r hack. (should be really C(AuD)+R hack...)  Date: 4 Dec 1981 20:19:59-PST From: Kim.jkf at Berkeley Full-Name: John K. Foderaro To: lisp-forum@mit-mc Subject: c*r I never claimed that writing code with massive numbers of a's and d's was good. I don't think it is, and I never do it. However, I do write programs which generate programs which create long c*r names. I don't have to worry that only 4 a's and d's are allowed (as I believe is the case in macsyma and lisp machine lisp). The point is: the meaning of consecutive a's and d's is clearly defined in the Franz lisp manual, supported by both the Franz interpreter and compiler. I am proud of the uniformity, but nowhere in the manual do I encourage people to use large strings of a's and d's. However if they want to hang themselves, they are welcome to.
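The Franz scheme Foderaro describes (spell out the a/d path, wrap it in c...r, and get an accessor) can be mocked up in modern terms. The following Python sketch is illustrative only: the 2-tuple cons-cell model and the name make_cxr are assumptions, not Franz internals.

```python
import re
from functools import reduce

# Cons cells modeled as 2-tuples (car, cdr), with None for NIL.
def car(x): return x[0]
def cdr(x): return x[1]

def make_cxr(name):
    """Return the accessor named by a c{a,d}+r string, or None for other names."""
    m = re.fullmatch(r"c([ad]+)r", name)
    if m is None:
        return None
    steps = [car if ch == "a" else cdr for ch in m.group(1)]
    # Letters apply right to left: caddr means (car (cdr (cdr x))).
    return lambda x: reduce(lambda v, f: f(v), reversed(steps), x)

lst = (1, (2, (3, (4, None))))          # the list (1 2 3 4)
assert make_cxr("caddr")(lst) == 3      # (caddr '(1 2 3 4)) is 3
assert make_cxr("cons") is None         # non-c*r names are left alone
assert make_cxr("cr") is None           # per Steele, there is no CR
```

Note that the + in the pattern requires at least one a or d, which is exactly Steele's point about CR: the identity accessor never arises from this naming scheme.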
As for kmp's data abstraction example, I am sure that he would have done the same thing if he had been able to string as many a's and d's together as he wished; the only difference is that his macro would have been easier to write and understand. Imagine someone who has just read a book on lisp asking himself why this expression (CAR (CDDR (CDDDDR X))) was written this way instead of (caddddddr x).  Date: 4 December 1981 2241-EST (Friday) From: Guy.Steele at CMU-10A To: lisp-forum at MIT-MC Subject: C*R Message-Id: <04Dec81 224102 GS70@CMU-10A> Besides the lack of data abstraction, there is the problem that long C*R strings are hard to parse. Is CADDADDDADR the same as CADDADDADDR? Even when I occasionally use long C*R strings in my code, I break them up into meaningful units just so you can follow them. My general rule is that all A's must immediately follow a C, except in the two cases CAAR and CADAR. Thus the above two names would come out as (CADDR (CADDDR (CADR X))) and (CADDR (CADDR (CADDR X))). It is easily seen that I only use twelve of the thirty C*R functions of up to four letters: CAR CDR CADR CAAR CADAR CADDR CADDDR CDDR CDDDR CDDDDR and not only can I not count in my head, I forgot that I sometimes use CDAR as well. The point is that each of these names is just within the limit of a "chunk size" that can be grasped immediately (for me). But if I have to write such messes in more than two places, it's time to drag out DEFSTRUCT or the equivalent. So while it might seem inelegant to have an arbitrary cutoff on C*R strings at four A/D's (and it surely is inelegant), nevertheless it is a good limit to impose on oneself anyway for reasons of readability. When I was young and naive I put this "feature" into my IBM 1130 LISP (this was in 1971), and then discovered that I never ever wanted to use it. --Guy  Date: 4 December 1981 21:16-EST From: George J.
Carrette Subject: c*r To: CSVAX.jkf at UCB-C70 cc: LISP-FORUM at MIT-MC, CSVAX.fateman at UCB-C70 From: John Foderaro ... I always thought it was ridiculous that in one of the macsyma macro packages there were things like (defmacro caaaaar (x) `(caaaar (car ,x))) ... Indeed, it is truly ridiculous, but how unfortunate it is that you did not ask an experienced lisp programmer about "why" it was ridiculous. Just because I think it is bad to automatically support all possible c*r's does not support your conclusion that I am proud of the fact that macsyma contains (DEFMACRO CAADADR (X) `(CAADAR (CDR ,X))), where a family of about 10 of these macros is defined in the macro module "MRGMAC", which is not included by default in the macsyma compilation environment. Use of these macros is only in a very few macsyma modules. On the contrary, the few macsyma programmers who used these macros were losing badly. They avoided data abstraction at all costs! The road to unreadable code is paved with vast nestings of Car & Cdr. In contrast, here is something KMP defined for dealing with the "MDO" structure in Macsyma. (DEFMACRO MAKE-MDO () '(LIST (LIST 'MDO) NIL NIL NIL NIL NIL NIL NIL)) ... (DEFMACRO MDO-NEXT (X) `(CAR (CDDDDR ,X))) (DEFMACRO MDO-THRU (X) `(CAR (CDR (CDDDDR ,X)))) (DEFMACRO MDO-UNLESS (X) `(CAR (CDDR (CDDDDR ,X)))) The "c*r" feature of franz encourages some of the things that give LISP a bad name. -gjc  Date: 4 December 1981 13:49-PST (Friday) From: John Foderaro Subject: c*r Via: CSVAX.BerkNet (V3.50 [10/17/81]); 4 December 1981 13:49-PST (Friday) To: GJC@MIT-MC, LISP-FORUM@MIT-MC Cc: CSVAX.fateman@Berkeley In-Reply-To: Your message of 4 December 1981 16:28-EST what?? Of course it is a good idea to support all possible c*r's. The operations of functions with those names are well defined. I always thought it was ridiculous that in one of the macsyma macro packages there were things like (defmacro caaaaar (x) `(caaaar (car ,x))).
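The contrast GJC draws with KMP's MDO accessors can be transposed into modern terms. In this Python sketch the field layout follows the MAKE-MDO macro quoted above; everything else (the list model, the index constants) is invented for illustration.

```python
# KMP's MAKE-MDO builds a list of 8 slots; his accessors name positions in it.
# MDO-NEXT is (CAR (CDDDDR X)) = element 4, MDO-THRU element 5, MDO-UNLESS 6.
MDO_NEXT, MDO_THRU, MDO_UNLESS = 4, 5, 6

def make_mdo():
    """A fresh MDO structure, mirroring (LIST (LIST 'MDO) NIL NIL NIL NIL NIL NIL NIL)."""
    return [["MDO"], None, None, None, None, None, None, None]

mdo = make_mdo()
mdo[MDO_UNLESS] = "some-condition"

# The c*r way: the reader must count a's and d's to see which slot this is.
assert mdo[2 + 4] == "some-condition"
# The named way: the intent is carried by the name, not by counting.
assert mdo[MDO_UNLESS] == "some-condition"
```

Both expressions read the same slot; GJC's point is that only the second tells the reader which slot it is and why.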
I never brought it up because I assumed that you (maclisp users and macsyma system programmers) too thought that it was bad. I never realized that you (gjc) were proud of it.  Date: 4 December 1981 16:28-EST From: George J. Carrette To: LISP-FORUM at MIT-MC I was compiling some macsyma code originally written in franz and noticed a very strange thing: the use of the functions CADDDDR, CDADADAR, CDDADAR, CAAADAR, and CADAADAR. Indeed, I tried various functions in franz like CADDADDDAAADDAR, and evidently, if you use an undefined function with C*R pname, the car/cdr function gets generated. Well, it isn't a very good idea to have such a feature. -gjc  Date: 25 November 1981 09:43 cst From: VaughanW at HI-Multics (Bill Vaughan) Subject: please add me to the mailing list Sender: VaughanW.REFLECS at HI-Multics To: lisp-forum at MIT-AI cc: VaughanW at HI-Multics Please add me to the lisp-forum mailing list; thanks. Bill Vaughan VaughanW @ HI-Multics  Date: 3 August 1981 20:57 cdt From: VaughanW at HI-Multics (Bill Vaughan) Subject: hi-multics is back Sender: VaughanW.REFLECS at HI-Multics To: sf-lovers-request at MIT-AI, space-request at MIT-AI, cube-lovers-request at MIT-AI, future-request at MIT-AI, info-micro-request at MIT-AI, lisp-forum at MIT-AI, arms-d-request at MIT-AI, poli-sci-request at MIT-AI, energy-request at MIT-AI cc: VaughanW at HI-Multics hi-multics is back on the net; so kindly restore me to the mailing list. Thanks. Bill Vaughan (VaughanW at HI-Multics)  Date: 9 Nov 1981 10:10 PST From: Masinter at PARC-MAXC Subject: macros in Interlisp To: LISP-FORUM at MIT-AI In-reply-to: Guy.Steele@CMU-10A's message of 6 November 1981 1815-EST In-reply-to: KMP@MIT-MC's message of 7 November 1981 01:17-EST In-reply-to: BENSON@UTAH-10's message of 7 Nov 1981 2027-MST The problem seems to be a matter of terminology rather than the effect that the user sees or the actual implementation of macro translation.
There is a misunderstanding that "DWIM" in Interlisp means "error" correction. The term "DWIM" has been overloaded to include many different mechanisms, some of which are an integral part of "normal" execution, including macro translation, the record package, iteratives, etc. This doesn't mean that the implementation isn't modular, only that the documentation calls all of them by the same name rather than making up new names for every little feature. The mechanism by which macros get translated, functions get auto-loaded, etc. is a "fault handler" rather than an "error handler" (e.g., the interface to the interpreter is via FAULTEVAL and FAULTAPPLY). An analogy can be drawn to the implementation of virtual memory: the hardware may do virtual to real translation for some pages, but if the page is not in real memory, it goes outside of the emulator/microcode to a macrocode pagefault handler. One can think of macros, auto-load, etc. as extensions to the "virtual" definition space. Once a macro gets translated, the fault handler is no longer called. It has been useful to have the fault-handler user accessible, for a variety of syntax extensions which have not always been prefix-driven. In one light, the problem might be that there is not in Interlisp a "clear division between the interpreter and the error signal/correction facility [as] in Maclisp." I am uncertain where this division should be: is auto-load part of the interpreter? I can imagine situations where you might want to include some sort of data-base lookup to find out where to do the auto-load from -- is that database lookup part of the interpreter too? Finally, I don't think that making the interface from the interpreter to the fault handler compatible was a major problem in producing compatible Interlisps for any of the current implementations, compared to the magnitude of the total task. 
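The fault-and-cache scheme Masinter describes can be mocked up as a toy. All names below (fault_eval, the dicts) are invented for illustration, and this is emphatically not Interlisp's actual FAULTEVAL protocol; it only shows the shape of the idea: the core evaluator knows nothing about macros, an operator it cannot apply "faults" out to a handler, and the expansion is computed only once per form, the way CLISPTRAN hashes translations.

```python
functions = {"+": lambda *args: sum(args)}    # what the core evaluator knows
macros = {"double": lambda x: ("+", x, x)}    # visible only to the fault handler
translations = {}                             # form -> cached expansion

def evaluate(form):
    if not isinstance(form, tuple):
        return form                           # atoms evaluate to themselves
    op = form[0]
    if op in functions:
        return functions[op](*[evaluate(a) for a in form[1:]])
    return fault_eval(form)                   # the page-fault analogy: leave the core loop

def fault_eval(form):
    if form in translations:                  # translated before: reuse, don't re-expand
        return evaluate(translations[form])
    op = form[0]
    if op in macros:
        expansion = macros[op](*form[1:])
        translations[form] = expansion        # cache the translation once
        return evaluate(expansion)
    raise NameError(f"undefined function: {op}")

assert evaluate(("double", 21)) == 42
assert ("double", 21) in translations         # the expansion was remembered
```

The division KMP argues about is visible here: whether fault_eval counts as "the interpreter" or as an error/DWIM facility is purely a matter of where one draws the module boundary.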
Larry  Date: 7 Nov 1981 2027-MST From: Eric Benson Subject: Re: "Until you try to implement it" To: RMS at MIT-AI, LISP-FORUM at MIT-AI In-Reply-To: Your message of 7-Nov-81 1632-MST I was referring to the error mechanism in Interlisp, and all the hooks in it. It was a tongue-in-cheek, off-the-cuff (tongue-in-cuff?) remark. It was a cheap snipe at Interlisp, and I'm sorry for it. I really do have a lot of respect and admiration for the original implementors of Interlisp. They managed to produce a software development environment that only very recently has had any competition with respect to user-friendliness and system integration. I do think it was a poor design decision to make things like CLISP and macros triggered by the error handler. I feel sorry for those who must produce a totally compatible system on a new machine. Getting back to macros, I agree that they are very simple to implement. Apparently people have different notions of what macros are. I am thinking of the (eval (apply (car x) (list x))) variety. (Referred to, I believe, as computed macros.) I think that any other useful macro type can be implemented using this type. In fact, they tend to be implemented as macros themselves, i.e. macro-defining macros, like defmacro. Having three different kinds of macros doesn't mean you need three different evaluation mechanisms. -- Eric -------  Date: 7 November 1981 18:32-EST From: Richard M. Stallman Subject: "Until you try to implement it" To: LISP-FORUM at MIT-AI What is "it"?
The subject of the discussion is macros, and they were implemented in Maclisp dialects long ago (not using the error system). Exactly what is it, then, that KMP is supposed to implement before he is allowed to complain?  Date: 7 Nov 1981 0225-MST From: Eric Benson To: KMP at MIT-MC, MASINTER at PARC-MAXC cc: LISP-FORUM at MIT-MC In-Reply-To: Your message of 6-Nov-81 2317-MST So what's your complaint? Everything that people rave about in Interlisp is handled by the error mechanism. Don't bitch until you try to implement it... -------  Date: 6 November 1981 1815-EST (Friday) From: Guy.Steele at CMU-10A To: masinter at PARC-MAXC Subject: InterLISP Macros CC: lisp-forum at MIT-AI Message-Id: <06Nov81 181522 GS70@CMU-10A> Indeed, my inspection of the manual was too cursory; MACROTRAN is what I was looking for but failed to find. This is in the 1978 manual but not in the 1975 manual. Footnote 44 does explain that the expansion is done only once, through use of CLISPTRAN. This (page 23.57) appears to be the rough equivalent of the MacLISP DISPLACE, and will either use a hash table or replace the macro call with a form (CLISP%_ new . old), similar in effect to the DISPLACED form of MacLISP; the editor and prettyprinter know about CLISP%_ specially and elide it. MACROTRAN does not work if DWIM is not enabled. --Guy  Date: 7 November 1981 01:17-EST From: Kent M. Pitman To: MASINTER at PARC-MAXC cc: LISP-FORUM at MIT-MC Date: 5 NOV 1981 2158-PST From: MASINTER at PARC-MAXC Macros in Interlisp are a separate and powerful facility... MACROS are integrated into the system. Macros get expanded during interpretation... Date: 6 Nov 1981 14:21 PST From: Masinter at PARC-MAXC ...The documentation for Interlisp macros is in the chapter on the compiler. Pages 18.12-13 explains the MACROTRAN facility which expands macros during interpretation. The documentation is not particularly clear that macros by default expand once at interpretation..., but it does say it. 
I believe the description of macros and macro expansions should indeed be moved out of the compiler chapter and into a section on the interpreter/functions, etc. for the next Interlisp manual revision. ----- According to the Oct 78 copy of the Interlisp manual, p18.12-13 MACROTRAN is a package that enables the user to run programs interpretively which contain calls to functions which are only defined in terms of a compiler macro. MACROTRAN is implemented via an entry on dwimuserforms (Section 17). [Footnote: and thus will not work if DWIM is not enabled] When the interpreter encounters a form car of which is an undefined function, macrotran is called. If car of the form has a MACRO property, the macro is expanded, and the result of this expansion is evaluated in place of the original form... Certainly you will be hard-pressed to convince me that this is "integrated" into your system. There is a clear division between the interpreter and the error signal/correction facility in Maclisp. Functions, macros, and AUTOLOAD-type things are not errors and are no business of any DWIM-like facility from my point of view. Coupling it to something so conceptually unrelated as macros is very poor modularity at both a conceptual and implementational level. What if someone wanted to turn off DWIM? Would you argue that he should have to do without macros? They certainly are a proven tool and I don't think that's a reasonable penalty to pay. I applaud your idea of moving macros out of the compiler section but I hope the move will involve something more than just picking up the text and indexing it elsewhere ... The change involves a shift in philosophy about what is undefined. MACROs are certainly defined. 'Tis true they cannot work in all places functions can, but that's far from undefined. It just means APPLY and MAP have a bit of a hard time because there is no full macro form.
The LispM's substs are constrained enough to work like functions though, and Macsyma's macros don't operate on whole forms so they map and apply. If macros continue as they are implied to be in current doc as a hack fix to essentially a bug situation in the interpreter, then I don't know that I'd agree that you have them at all in your language ... in your system perhaps, but not in your language. And I think the language is where it's at. Abstraction is the stuff out of which big programs are built and needs to be addressed at the heart of the system, not as an add-on... -kmp  Date: 6 Nov 1981 14:21 PST From: Masinter at PARC-MAXC Subject: Re: InterLISP Macros In-reply-to: Guy.Steele's message of 6 November 1981 1608-EST (Friday) To: lisp-forum at MIT-AI The documentation for Interlisp macros is in the chapter on the compiler. Pages 18.10 - 18.13 talk about macros. Pages 18.12-13 explain the MACROTRAN facility which expands macros during interpretation. The documentation is not particularly clear that macros by default expand once at interpretation (the description focuses on the mechanism by which the translation is implemented rather than the general effect), but it does say it. I believe the description of macros and macro expansions should indeed be moved out of the compiler chapter and into a section on the interpreter/functions, etc. for the next Interlisp manual revision.  Date: 6 November 1981 1608-EST (Friday) From: Guy.Steele at CMU-10A To: Masinter at PARC-MAXC Subject: InterLISP Macros CC: lisp-forum at MIT-AI In-Reply-To: Richard M. Stallman's message of 6 Nov 81 03:00-EST Message-Id: <06Nov81 160834 GS70@CMU-10A> While InterLISP may indeed have various kinds of macros, there is no evidence that I can find in the 1975 or 1978 manuals to indicate that the interpreter knows anything at all about them. They seem to be discussed only in the compiler chapter, and only describe their effect within a function to be compiled.
I think it would therefore be quite understandable if someone were to read this and conclude that the facility was unlike the MacLISP facility whose very point is that it behaves the same, in the absence of bizarre side effects, in both the compiler and the interpreter. If InterLISP macros worked only in the compiler, then one would have to provide a separate definition for use in the interpreter. Will the current state of things be documented in the 1981 InterLISP manual (I'm extrapolating a three-year frequency from two data points!)? --Guy  Date: 6 November 1981 03:00-EST From: Richard M. Stallman To: LISP-FORUM at MIT-AI If Interlisp now has computed macros, that is fine, but don't go around accusing people who believe otherwise of having a "mind bug". It is no bug to believe something which was once true, in the absence of information to the contrary. If they were added in 1978, that still left many years for Maclisp people to come across the deficiency. It is reasonable to correct this statement if it is no longer true, but please don't insult me for having found it out in the past.  Date: 5 NOV 1981 2158-PST From: MASINTER at PARC-MAXC Subject: Macros in Interlisp To: LISP-FORUM at MIT-AI Macros in Interlisp are a separate and powerful facility. There are actually three kinds of macros in Interlisp, as well as several other facilities which provide features which are often provided by macros in other Lisp dialects: Substitution macros (which I assume is what RMS was referring to) allow one to give a substitution template for a form, e.g. if BINDVAR's macro is ((VAR VAL . FORMS) ((LAMBDA (VAR) . FORMS) VAL)) then (BINDVAR X Y (FOO)) -> ((LAMBDA (X) (FOO)) Y) Computed macros correspond to the MacLisp style of macro, where a variable is bound to the macro body, a form is evaluated, and the result of the evaluation is used as the macro translation. 
While computed macros are more powerful than substitution macros, the substitution macros have the advantage that they are often more concise, and have the important property that the result of macro expansion depends only on the macro instance and the macro definition. LAMBDA macros correspond to "inline" definitions, e.g., if the macro for FOO is (LAMBDA (A B C) (MUMBLE)) then (FOO X Y Z) expands to ((LAMBDA (A B C) (MUMBLE)) X Y Z). MACROS are integrated into the system. Macros get expanded during interpretation (with the macro translation hashed off of the actual macro body so that while translation only happens once, the original source is left intact for pretty printing.) If the user maintains a masterscope database about his functions, Masterscope will print a warning about which functions need recompilation when a macro is changed. [This is conservative only for substitution and LAMBDA macros, since expansion of computed macros could depend on arbitrary parts of the compile/runtime environment]. In addition to Macros, Interlisp allows user-defined "CLISP" forms (e.g., the PUSH, POP, CHANGE etc. expressions were originally done this way) and extensions to the iterative expressions which allow user-defined iteration keywords. Finally, the RECORD package allows declarations of data structure types which are in actuality macro expansions, via the ACCESSFNS record type. While this can be thought of as a convenient way of packaging together a related set of macros, these ACCESSFNS records are integrated with the record package to provide for "creation" of new instances, record path tracing, etc. These facilities have been in place at least since 1978. The misconception about macros in Interlisp seems to be a common mind-bug. I found references to Interlisp's lack of macros in Winston & Horn, and also in Charniak, Riesbeck and McDermott's book. Nonsense. Larry  Date: 5 November 1981 23:57-EST From: Richard M.
Stallman To: LISP-FORUM at MIT-AI The things in Interlisp that are called "macros" are not the things that we know and love. They are equivalent in power to DEFSUBST.  RG@MIT-AI 11/05/81 12:57:54 Re: Lisp macros To: ALAN at MIT-MC CC: LISP-FORUM at MIT-MC Q32 lisp, written by R Saunders around 63-64 made heavy use of macros. However, it was a compiler-only LISP and it was thought at the time that macros were something that had to do only with compilation. I think an early version of PDP6 Maclisp was the first to have macros as part of EVAL. Another early system which I think had macros was m-416 lisp written by Tim Hart. A collection of lisp papers published by III and edited by Edmund C Berkley had quite a bit of info on the lisps of the day, including a complete listing of the Q32 lisp compiler.  Date: 5 Nov 1981 09:26 PST From: Deutsch at PARC-MAXC Subject: Re: History of Lisp macros In-reply-to: ALAN's message of 4 November 1981 20:25-EST To: Alan Bawden cc: LISP-FORUM at MIT-MC, Teitelman I was using macros heavily in Interlisp in the late 1960's, and I think they may even have been around in BBN-Lisp (the predecessor of Interlisp) in the mid-60's. Warren Teitelman should be able to give you a more accurate answer.  Date: 4 November 1981 20:25-EST From: Alan Bawden Subject: History of Lisp macros To: LISP-FORUM at MIT-MC Can anyone tell me anything about the history of Lisp macros? I seem to remember that the idea was introduced during the early seventies into MacLisp. I thought that there was an MIT-AI memo or some such document that actually contained the original proposal, but I can't seem to find anything like that. Anybody know the real story? Do macros go back further than this? Was MacLisp really the first Lisp to have them? Thanks in advance. -Alan  Date: 29 October 1981 17:03-EST From: George J. 
Carrette To: LISP-FORUM at MIT-MC Just thought this may be interesting to some people: the pretty-printer used in NIL was directly lifted from the Lispmachine, and ran with trivial modifications to the STREAM interface, plus the replacement of calls to LISTP with calls to PAIRP. The LISPM documentation mentions that LISTP may be changed in the future such that (LISTP ()) will be T instead of NIL. In NIL (LISTP ()) => T presently, which broke the lispm grinder code. It is speculated that LISTP will be flushed from the NIL language specification, as it is not of much use, which is to say that it has never been used, and its "popular" definitions are incompatible with the proper inductive definition of what a LIST is. Two predicates which are of interest are PROPER-LISTP, and CARABLEP. -gjc  Date: 5 October 1981 00:07-EDT From: Daniel L. Weinreb Subject: Mary had a moby lambda To: RWG at MIT-MC, LISP-FORUM at MIT-MC In my opinion, &AUX is typographically ugly and confusing; I have a lot of trouble reading programs that use it. I guess people have different taste about these things. I really prefer the way LET looks and I never use &AUX any more. I agree that LET should be able to handle multiple values; destructuring (as in generating calls to CAR or AREF) is too high-level for LET, but the basic handling of values in function calls is not too high-level. HOWEVER, I insist that this has nothing at all to do with lambda-lists. LET is a special form for binding variables to values, and it has nothing to do with function calling and argument passing; the latter things are what lambda-lists are about.  Date: 3 October 1981 21:36-EDT From: George J. Carrette Subject: NIL LET. To: LISP-FORUM at MIT-MC In the various syntax processing parts of NIL, the compiler, the macros, the interpreter, etc. we find a destructuring let allowing various embedded keywords to be very useful.
The implementation is somewhat unconventional, so I'll just give an example in this short note: (deform (dovector (&symbol elem vector &optional index) &rest body) ... do some stuff ...) The destructuring construct has been carefully optimized to provide: [0] recursive uniformity. [1] concise and obvious expression of how to break up a form. [2] small inline codesize. [3] superb error checking with maximal context provided in descriptive error messages. [4] recoverable errors in syntax processing, for example on-the-fly editing. That's the end of my VT-52 screen, bye bye for now. -gjc  Date: 2 Oct 1981 15:20 PDT From: Masinter at PARC-MAXC Subject: LAMBDA extension To: LISP-FORUM at MIT-MC Interlisp has used the notion of alternate LAMBDA-words in order to extend the syntax of variable binding styles. The LAMBDATRAN package is described in the Interlisp Reference Manual pp 24.32-24.33. For example, the DECL package, which adds type declarations to Interlisp, uses LAMBDATRAN to define a new LAMBDA-word DLAMBDA, e.g. (DE FOO (DLAMBDA ((A FLOATP) (B FIXP) (RETURNS LISTP)) --] The QLISP implementation used LAMBDATRAN to implement QLAMBDA. The implementation of LAMBDATRAN involved advising/redefining the functions for accessing argument lists and types (NARGS, ARGLIST, FNTYP, etc.) to operate on the translation, and hooks in the interpreter-fault-handler and compiler. Larry  Date: 2 October 1981 1458-EDT (Friday) From: Guy.Steele at CMU-10A To: lisp-forum at MIT-MC Subject: Forwarding lost copy of mail from BENSON Message-Id: <02Oct81 145836 GS70@CMU-10A> - - - - Begin forwarded message - - - - Date: 30 Sep 1981 1439-MDT From: Eric Benson Subject: Re: LAMBDA syntax counter-proposal To: Moon at MIT-MC cc: Guy.Steele at CMU-10A, Lisp-Forum at MIT-MC In-Reply-To: Your message of 30-Sep-81 1348-MDT Via: UTAH-20; 30 Sep 1981 1639-EDT "The syntax of the language should be designed for the convenience of human beings, not for the minor convenience of some program". Absolutely!
So why, after 20 years, are we still writing programs in the "machine language" of Lisp, parenthesized lists? Obviously, only for the "minor convenience" of EVAL. We human beings find it much easier to parse programs with infix operators and the like, prettyprinters and flashing left parens notwithstanding. What's more, Lisp is one of the easiest languages available in which to write a parser or prettyprinter. My advice is: Make lambda-lists easy to take apart by programs; that's what they're for. But give us mere mortals some syntax, and not just in lambda-lists either. -- Eric ------- - - - - End forwarded message - - - -  Date: 2 October 1981 02:10-EDT From: Kent M. Pitman To: LISP-FORUM at MIT-MC Date: 2 October 1981 01:52-EDT From: Earl A. Killian I don't object to what you're proposing; it is essentially what I was saying about LAMBDA being a macro that expands into something simpler but more parsable, except you're breaking it up into multiple special forms (VANILLA-LAMBDA, SPECIAL-LAMBDA and FUNCTIONAL-LAMBDA) instead of one. That's ok with me.  Date: 2 October 1981 01:38-EDT From: Kent M. Pitman Subject: LAMBDA proposals and counter-proposals To: EAK at MIT-MC cc: LISP-FORUM at MIT-AI Your last note addresses an interesting and important problem of how to scope the declaration correctly. I am familiar with the LET/LET* problem with propagating declarations around right. It's a pain. But I think the problem is caused by the fact that you really have a request that a different kind of binding be done and you're trying to overload LAMBDA to make it be able to bind several different kinds of cells. So my answer is to create a different kind of binding primitive. ie, let the user write (DEFUN F (X &SPECIAL Y Z) ...code...) if he feels strongly that that's how he wants to visualize it, but let it macroexpand into: (SETF #'F #'(VANILLA-LAMBDA (X G0001 Z) ((SPECIAL-LAMBDA (Y) ...code...) G0001))) A smart compiler will make the right code for this without blinking. 
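The rewrite KMP sketches (a flat binding list with sticky &SPECIAL markers turned into nested VANILLA-LAMBDA and SPECIAL-LAMBDA applications) can be generated mechanically. In this Python sketch, lists stand in for Lisp forms, the lowercase binder names come from KMP's message, and expand_bindings is an invented helper, not anyone's actual macro.

```python
def expand_bindings(bindings, body):
    """bindings: (var, value_form) pairs interleaved with the sticky markers
    '&special' / '&unspecial'. Returns the nested binder form."""
    # Group consecutive variables that share a binder kind.
    groups, kind = [], "vanilla-lambda"
    for item in bindings:
        if item == "&special":
            kind = "special-lambda"
        elif item == "&unspecial":
            kind = "vanilla-lambda"
        elif groups and groups[-1][0] == kind:
            groups[-1][1].append(item)
        else:
            groups.append((kind, [item]))
    # Wrap the body from the inside out, one binder application per group.
    form = body
    for binder, pairs in reversed(groups):
        vars_, vals = [p[0] for p in pairs], [p[1] for p in pairs]
        form = [[binder, vars_, form], *vals]
    return form

# KMP's example: (LET* ((W 1) &SPECIAL (X 2) (Y 3) &UNSPECIAL (Z 4)) body)
result = expand_bindings(
    [("w", 1), "&special", ("x", 2), ("y", 3), "&unspecial", ("z", 4)], "body")
assert result == [["vanilla-lambda", ["w"],
                   [["special-lambda", ["x", "y"],
                     [["vanilla-lambda", ["z"], "body"], 4]],
                    2, 3]],
                  1]
```

The output is exactly the explicit, parse-free nesting KMP advocates: each group of like-bound variables becomes one application of a single well-understood primitive.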
A not-so-smart one could easily enough be taught to look for this idiom and not compile the extra stack push. The loss of time in the interpreter would likely be negligible. With appropriate macro expansion and printer hooks, the user should rarely have to look at this visually. BUT the great thing is that code-walkers can really handle this kind of stuff VERY elegantly. Similarly, suppose you had an &SPECIAL keyword which was sticky and could go in LET*'s as given here: (LET* ((W 1) &SPECIAL (X 2) (Y 3) &UNSPECIAL (Z 4)) ...code...) Then it might expand into the conceptually elegant underlying form: ((VANILLA-LAMBDA (W) ((SPECIAL-LAMBDA (X Y) ((VANILLA-LAMBDA (Z) ...code...) 4)) 2 3)) 1) which is explicit, parse-free, and which does not have the LET/LET* declaration scoping problem you alluded to in your note. Such an approach does mean that you would have to have several kinds of binding primitives around -- eg, VANILLA-LAMBDA -- does the really primitive kind of binding which the compiler is allowed to assume is essentially lexical. SPECIAL-LAMBDA -- binds the variable's special cell always. FUNCTIONAL-LAMBDA -- binds the variable's function cell. These are the only ones I think you would need at the lowest level. Perhaps one or two more could be shown to be necessary. But the nice thing is that they all have very simple meanings and are very easy for code-manipulating programs to manipulate. I think that basing a language on the composition of a small number of well-understood primitives -- even if it means that we have to create some that have not been needed until now -- is far better than trying to do everything all in one place.  Date: 2 October 1981 00:13-EDT From: Bill Gosper Subject: Mary had a moby lambda To: LISP-FORUM at MIT-MC I don't buy the idea that because lambda is fundamental, lambda lists ought to remain simple, at possible cost in convenience.
What prevents us from decoupling the fundamental binding mechanism from this useful and rapidly evolving function preamble, which is called "lambda list" for purely historical reasons? Then we would be free to optimize for ease and clarity. (If we could but define them.) Take poor &AUX. Intellectually, I agree with its detractors, but typographically, the corresponding LET usually adds vertical attenuation, which is sometimes exacerbated by the extra indentation. So my code usually LOOKS BETTER with &AUX. Also, here is a semantic desideratum for lambda list architects. Multiple values, though still short of first class citizenship, are very fundamental. Currently, we can (in one &AUX or LET) bind any number of variables to singleton values, but we must nest a separate MULTIPLE-VALUE-BIND for each tuple of values. I'd like to see a :MULTIPLE syntax to permit capturing tuples and singletons all at once.  Date: 1 October 1981 23:21-EDT From: Earl A. Killian Subject: LAMBDA proposals and counter-proposals To: kmp at MIT-AI cc: LISP-FORUM at MIT-AI I don't like the suggestion that declarations in LAMBDA lists are unnecessary because of LOCAL-DECLARE and DECLARE. First of all, the semantics are different. Consider (LAMBDA (A (SPECIAL B)) ... (LAMBDA (B) ...) ...) and (LOCAL-DECLARE ((SPECIAL B)) (LAMBDA (A B) ... (LAMBDA (B) ...) ...)) I would expect the second B to be unspecial in the first example, and special in the second, and I prefer the semantics of the first. Also, LOCAL-DECLARE is less readable (the LISPM manual even recommends that DECLARE be used instead of LOCAL-DECLARE as a result). The problem with DECLARE is that it loses when you want to use it with macros that generate lambdas. E.g. suppose I want to define a LET!; then I have to make it specially check for leading DECLARE forms and put them directly after the LAMBDA instead of with the rest of the body.  Date: 1 October 1981 21:20-EDT From: George J.
Carrette To: KMP at MIT-MC cc: LISP-FORUM at MIT-MC Don't forget, CGOL had SETF and destructuring LET, not to mention a Defmacro (general "DEFINE") long before "paren-level" lisps had these features.  Date: 1 October 1981 19:49-EDT From: Kent M. Pitman Subject: LAMBDA proposals and counter-proposals To: MOON at MIT-AI, Guy.Steele at CMU-10A, EAK at MIT-MC cc: LISP-FORUM at MIT-AI There are MANY issues being addressed at once here. In the end, all will have to be addressed. However, it might be of help for people to clearly argue for or against one issue at a time, rather than mixing them. Here is a sampling of the kinds of issues that I feel are becoming too confused with each other; I think discussing the sub-issues separately will help us build a better foundation for a later discussion of the whole issue -- I fear we are currently trying to solve too much all at once... * HOW SHOULD EXTENSIONS BE INTRODUCED INTO THE LANGUAGE? If we have one hairy construct to which we continue to add subtle, concise additions, is that the right thing? (eg, the introduction of new &keywords) Or are we better off with slightly more verbose constructs each of which has an independent function? (eg, using LET instead of &AUX, LOCAL-DECLARE instead of &SPECIAL, etc.) This question is made more complex by the possibility of making DEFUN and perhaps even LAMBDA macroexpand from concise though syntactically hairy constructs into larger but syntactically more transparent underlying forms. ie, one might write (DEFUN F (X &AUX Y) ...) but get (DEFUN F (X) (LET* (Y) ..)). This is an important issue because it decides whether Earl's note about his concern over extensibility is worthwhile to consider. It is only worthwhile to worry about boxing yourself into a corner with respect to extension if you believe that extension comes from modifying existing operators/special-forms rather than creating new ones. This is an issue for which I do not believe we have a clear answer.
* WHAT EVEN BELONGS IN THE LAMBDA LIST? My feelings are that a lambda list is a place for naming formal quantities that are not known about locally. There are surely opposing viewpoints. There are several isolatable sub-issues: * &AUX variables are of a clearly different nature and do not, in my opinion, have any business being in the lambda list at all. * Destructuring. Is this part of the specification of the formal quantities or not? I am very wishy-washy about this, but lean toward the side of not wanting it in the bound variable list. * What about advice to the evaluator? This has separable sub-issues as well: * Eval/Quote information. This has to do with stuff that is to be done before the lambda expression ever gets hold of things, so is not rightly part of the things the lambda should contain. eg, fine if DEFUN wants to know about "E and you want to type in (DEFUN F (X "E Y) ...) but I would prefer that this macroexpand into (PROGN (ASSOCIATE-EVALUATION-INFORMATION 'F '(EVAL QUOTE)) (SETF #'F #'(LAMBDA (X Y) ...))) keeping the actual lambda list clear of such worthless clutter. It is silly for an anonymous lambda expression to bother holding onto information about EVAL and QUOTE since it can't do anything useful with it. * Keywording. Information about keywords and how to handle them correctly is VERY complex and I don't think there is any kind of good theory of that available at all. I would recommend not cluttering primitive lambda lists with keyword information that is not well-agreed to be the obvious right thing. * Declarations, Argument type-checking. This hasn't been recently addressed by anyone, but probably should be. These kinds of things may want to happen elsewhere besides at binding points, so need special forms like LOCAL-DECLARE, DECLARE, etc. Given that these have their own special forms, is it reasonable to clutter the bound variable list with a new notation for things that could already be expressed elsewise?
* Kinds of bindings: &FUNCTION, &SPECIAL, ...? Where do these belong? Are they part of the specification of the formal quantities? Is &FUNCTION F anything more than another dimension on the namespace issue? Should it be treated as such? * HOW SHOULD SYNTAX MANIFEST ITSELF IN A LANGUAGE? Dave gives me the impression that he feels that infix syntax at the lowest level is acceptable if it buys convenience. There are others who clearly do not see that this should be so. The LOOP macro buys such syntactic convenience without any modification to the notion that the underlying representation can be parse-free. Even Pratt's CGOL, which redefines all of the syntax of Lisp in a rather questionable way, does not have any problem coming up with appropriate parse-free underlying representations. Whether the lowest level of the language must be something one can fathom only with the aid of a parser or not seems to be of critical importance. As a closing note, I should add that I muchly support GLS's alternate lambda list proposal. I have had similar ideas in the past and I feel that any loss of expressive power it has can be made back up primarily by use of other existing primitives (like LET, DECLARE, ...) and possibly introduction of a small number of new primitives with special-purpose tasks. I also agreed with DLW's recent reply to EAK's and GLS's notes. -kmp  Date: 1 October 1981 01:25-EDT From: Earl A. Killian Subject: LAMBDA syntax counter-proposal To: dlw at MIT-AI, Guy Steele at CMU-10A cc: LISP-FORUM at MIT-MC I also think the GLS suggestion is a loser. By being concise, it eliminates the possibility of growth. I'd want it to at least support some sort of type information for those places where that is appropriate (which is admittedly nowhere on the LISPM).  Date: 1 October 1981 0007-EDT (Thursday) From: Guy.Steele at CMU-10A To: Eric Benson Subject: B.S.
CC: lisp-forum at MIT-MC In-Reply-To: Eric Benson's message of 30 Sep 81 15:39-EST Message-Id: <01Oct81 000733 GS70@CMU-10A> Maybe infix syntax *is* easier to read -- it is certainly more concise -- when there are no more than a couple of dozen functions involved, and none takes more than two arguments, and there are enough symbols to go around. Indeed, there are not enough ASCII characters to go around even for the common operators, so PASCAL must use the abomination "<>" for "not equals". (Why an abomination? Because for some of the types in PASCAL to which "<>" applies, the concept of "not equal" is not the same as the idea of "< or >"! Sets are an example.) Similarly FORTRAN uses ".AND.", ".LE.", and so on. When you have an extensible language, however, such as LISP is, one quickly runs out of meaningful symbols. I have had some experience with a LISP-like language with a user-extensible ALGOL-like syntax, namely EL1 at Harvard. Maybe you can tell what A * X + C > Z is supposed to mean, but how about A @> B <= C <==> D ? Could you really remember the precedences of ten new operators just defined on the preceding two pages of a program you were reading? Probably not. And you have to have precedence rules of some sort; otherwise everything must be parenthesized anyway, and you lose the advantage of infix over prefix. Can you even remember the nine precedence levels of PL/I? That's why APL, which doesn't even let the user define special symbols, has no relative precedences, but only a uniform right-to-left parsing rule. When you have functions of more than two arguments, infix syntax breaks down, and you have to use prefix after all in most algebraic languages. (In APL you simply are not permitted to have functions of more than two arguments!) In closing, let me quote two more experienced people than I.
McCarthy, in his paper on the history of LISP for the ACM History of Programming Languages Conference, explains that LISP was intended to have an infix-style syntax for programs, the parenthesized format being used only for data. A data parser (READ) was written to accompany other system functions such as CONS and ASSOC. It was only then that it was realized a universal interpreter (EVAL) could be written which would treat data objects as programs. This was much easier to code than the program parser, and once it existed it caught on and no one ever got around to writing the infix parser. Moreover, McCarthy speculates that this very accident was one of the important reasons for the persistence of LISP for twenty years, because it made it so much easier to realize that programs and data could be intermingled, which is of great importance in many LISP applications, especially in AI. The second quote: at the APL '79 conference, Alan Perlis, a great fan of APL, got up to speak on the differences between LISP and APL. The conclusion of his talk was as follows (I paraphrase): "There are two lessons APL must learn from LISP to thrive. (1) Functions must become manipulable data objects of the language. (2) We have to get rid of the crappy infix syntax!" --Guy  Date: 30 September 1981 2332-EDT (Wednesday) From: Guy.Steele at CMU-10A To: David A. Moon Subject: Re: LAMBDA syntax counter-proposal CC: lisp-forum at MIT-MC In-Reply-To: David A. Moon's message of 30 Sep 81 14:48-EST Message-Id: <30Sep81 233209 GS70@CMU-10A> Well, one might argue that supplied-p parameters are only pseudo-parameters or meta-parameters, but on the whole your point is well taken. Instead, I'll merely suggest that the correspondence between the parameter list and a list of the arguments is "simple and natural". You are also quite correct that a syntax shouldn't be designed to make the compiler twelve lines shorter.
However, I sincerely believe with at least 60% of my brain that the proposed syntax actually is at least as convenient for humans as the current &-syntax. It is certainly true that the proposal does not lend itself to extension. Recall rule [5] of the proposal: "That's all." The proposal was meant as an example at one extreme of the design spectrum; it provides the minimal capability in a simple way, and is not intended ever to be extended. Now maybe that is not the design philosophy we want, and 20% of my brain agrees (20% is still undecided), but that's another story. As for call-by-keyword syntax, I am presently somewhat against putting that into the base language. However, I would be glad to have my mind changed by a super-winning proposal. (So far I haven't seen a good proposal which discusses the syntax of calls thoroughly and also suggests feasible implementation mechanisms.) --Guy  Date: 30 September 1981 15:48-EDT From: David A. Moon Subject: LAMBDA syntax counter-proposal To: Guy.Steele at CMU-10A cc: LISP-FORUM at MIT-MC I don't think this proposal is much of a win. It leaves no place to put syntax for such things as keywords. Also it isn't true that there is one list element for each parameter, since the supplied-p variable for an optional is itself a parameter but does not have its own list element. It seems completely wrongheaded to me to design the syntax of one's language to make the compiler (actually, the function that parses lambda lists) be a dozen lines smaller. The syntax of the language should be designed for the convenience of human beings, not for the minor convenience of some program. If there are many programs that want to take apart lambda-lists, the function to do that (which exists in different form in all implementations) can easily be standardized and made part of the language. 
We don't design our array implementation so programs can get the dimensions by calling car and cdr on the array; why should we design our argument-description syntax that way?  Date: 30 September 1981 02:31-EDT From: Daniel L. Weinreb Subject: LAMBDA syntax counter-proposal To: lisp-forum at MIT-MC I am much happier with GLS's new proposal than anything I have ever seen before. This format is very easy for a program to parse. It is also easy for people to read -- it is even less visually obnoxious than our present format, whereas the more complex keyword-oriented proposals are definitely more visually obnoxious. I disagree with the philosophy put forth in EAK's recent mail -- I do not think that it should be a goal that the format of the lambda list be extensible. I do not think that lambda lists are a place in which creeping featurism should be allowed to fester. We are talking here about the basic foundation stones of the language. This is not a place to add random hair. If you want hair, you can build it with macros, but the macros should expand to code that does what you want, NOT into hairy lambda lists with extra keywords and features. As an example, I think that &aux variables are a bad thing. We already have several good syntaxes for letting you bind variables; &aux is now completely superfluous. &aux comes from CONNIVER, in the early days of Maclisp. Since then, LET has been installed, and it is far better for this purpose. &aux has outlived its usefulness and is obsolete. I'm also inclined to retain rule 4; the need for the two-pass processing seems semantically bad (and also looks hard to implement without an efficiency loss in accessing the required arguments that are past the optionals (since their stack location is no longer constant)). I am strongly in favor of this proposal.  Date: 29 September 1981 20:38-EDT From: Earl A. 
Killian Subject: LAMBDA syntax counter-counter-proposal To: LISP-FORUM at MIT-MC It is clear to me from the discussion so far that there ought to be a primitive that is very simple for lambda binding. Then various styles can be implemented as macros that use the primitive. I think this is in keeping with the general spirit of LISP. The "syntax" of the primitive would be designed without concern for readability, etc., but rather for simplicity and extensibility. I think &-lambdas fail the simplicity test, and GLS's proposal fails the extensibility test. RMS's proposal would be a good primitive if simplified (e.g. disallow (OPTIONAL A B) in favor of (OPTIONAL A) (OPTIONAL B) and eliminate AUX). Other things are possible (e.g. each entry being ( . )). I don't mean to suggest, however, that we should stop discussing what macro we want installed by default on LAMBDA.  Date: 29 September 1981 1728-EDT (Tuesday) From: Guy.Steele at CMU-10A To: lisp-forum at MIT-MC Subject: P.S. to previous message (LAMBDA list counter-proposal) Message-Id: <29Sep81 172833 GS70@CMU-10A> I forgot to mention that the proposed syntax can also be introduced in an upward compatible manner. If you encounter either a list or a non-null atomic tail before encountering an &-keyword, then it is this new syntax, and otherwise it is &-syntax. --Guy  Date: 29 September 1981 1724-EDT (Tuesday) From: Guy.Steele at CMU-10A To: lisp-forum at MIT-MC Subject: LAMBDA syntax counter-proposal Message-Id: <29Sep81 172446 GS70@CMU-10A> Here is another idea for a more "Lispy" LAMBDA-list syntax that some people here at CMU came up with. It is simpler to parse than &-syntax, but has all the power of &optional, &rest, and &aux. Moreover, it is more concise. It does not extend to other kinds of declaration. [0] A LAMBDA list is a (possibly dotted) list of specifiers, one for each non-rest parameter. [1] An atomic specifier names a required parameter. [2] A non-atomic specifier is of the form ( ).
The second and third items may be omitted. This describes an optional parameter. [3] A non-() tail must be a symbol. It names a rest parameter. [4] (Optional rule.) All atomic specifiers must precede all non-atomic ones. [5] That's all. &aux variables are handled by the incredibly subtle dodge of using an embedded PROG or LET. Other declarations are handled another way, such as by local DECLARE forms. Here are some comparative examples of the use of this syntax. (DEFUN STRING-POSITION (CH STR (START 0) (END (STRING-LENGTH STR))) (DO ((J START (+ J 1))) ((= J END) ()) (WHEN (CHAR-EQUAL CH (CHAR STR J)) (RETURN J)))) (DEFUN STRING-POSITION (CH STR &OPTIONAL (START 0) (END (STRING-LENGTH STR))) (DO ((J START (+ J 1))) ((= J END) ()) (WHEN (CHAR-EQUAL CH (CHAR STR J)) (RETURN J)))) (DEFUN ASET (NEWVAL ARRAY . SUBSCRIPTS) (LET ((N (ARRAY-RANK ARRAY))) (DO ((J 0 (+ J 1)) (S SUBSCRIPTS (CDR S)) (LIN 0 (+ (* LIN (ARRAY-DIMENSION ARRAY J)) (CAR S)))) ((= J N) (IF (NULL S) (%LINEAR-ASET ARRAY LIN NEWVAL) (TOO-MANY-SUBSCRIPTS))) (IF (NULL S) (TOO-FEW-SUBSCRIPTS))))) (DEFUN ASET (NEWVAL ARRAY &REST SUBSCRIPTS &AUX (N (ARRAY-RANK ARRAY))) (DO ((J 0 (+ J 1)) (S SUBSCRIPTS (CDR S)) (LIN 0 (+ (* LIN (ARRAY-DIMENSION ARRAY J)) (CAR S)))) ((= J N) (IF (NULL S) (%LINEAR-ASET ARRAY LIN NEWVAL) (TOO-MANY-SUBSCRIPTS))) (IF (NULL S) (TOO-FEW-SUBSCRIPTS)))) This representation is very easy to parse, because each LAMBDA-list element corresponds to exactly one parameter. It is also almost always more concise; there are two exceptions. The word &optional is simply omitted, and &rest is replaced by a dot. The word &aux is replaced by "(let (" and two close parentheses, and so is longer by three printing characters plus a bit of whitespace (exception #1). If there are many &optional parameters all defaulting to (), then "&optional x y z ..." must become "(x) (y) (z) ..."; this is less concise if there are more than five such parameters (exception #2).
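The rules above are simple enough to sketch as a parser in a few lines. Here is one in modern Python, purely for illustration (the function and variable names are invented): a lambda list is modeled as a Python list whose atoms are strings and whose non-atomic specifiers are sublists (name, default, supplied-p), with a dotted rest tail passed separately.

```python
def parse_lambda_list(specifiers, rest=None):
    """Parse a lambda list per rules [0]-[4]: atoms name required
    parameters, sublists describe optional parameters, and a dotted
    tail (here the `rest` argument) names a rest parameter."""
    required, optionals = [], []
    for spec in specifiers:
        if isinstance(spec, str):
            if optionals:                  # rule [4]: required precede optional
                raise SyntaxError("required parameter after an optional one")
            required.append(spec)          # rule [1]
        else:                              # rule [2]: (name default supplied-p)
            name, *tail = spec
            default = tail[0] if tail else None
            supplied_p = tail[1] if len(tail) > 1 else None
            optionals.append((name, default, supplied_p))
    return required, optionals, rest       # rule [3]

# STRING-POSITION's lambda list from the example above:
parse_lambda_list(["CH", "STR", ["START", 0], ["END", "(STRING-LENGTH STR)"]])
# -> two required parameters, two optionals, no rest parameter
```

Note how the "one list element per parameter" property makes the loop a single pass with no lookahead, which is exactly the parsing simplicity the proposal claims.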
Omitting optional rule [4] permits optional parameters to precede required parameters. This has some limited usefulness: (DEFUN LOG ((BASE 10) NUM) (/ (LN NUM) (LN BASE))) (LOG 2) => 0.301... ;Log base 10 of 2 (LOG 16 2) => 0.25 ;Log base 16 of 2 (DEFUN TURBOPROP (SYM (NEWVAL () NEWP) PROP) ;Can do either GET or PUTPROP (IF NEWP (PUTPROP SYM NEWVAL PROP) (GET SYM PROP))) This idea has been mentioned before, of course. It has the disadvantage of requiring two-pass processing when binding parameters to arguments. I'd be inclined to retain rule [4]. --Guy  Date: 29 September 1981 09:56-EDT From: George J. Carrette Subject: About &parsers To: KMP at MIT-MC cc: ALAN at MIT-MC, LISP-FORUM at MIT-MC A *heavy* advantage of having extensions to the DEFUN syntax macroexpand into the most primitive DEFUN or LAMBDA syntax is that at least then the behavior would be WELL-DEFINED. The present situation on the LISP-MACHINE with different semantics and binding "times" in the compiler vs. interpreter, and especially the dependency of binding "times" on the complexity of the forms being bound is truly disgusting. The Maclisp extended DEFUN syntax was well-defined in its very early forms, but lost this definiteness for the sake of optimizations. If we give up the formal nature of the lisp language, leaving a wake of ill-defined constructs, then I think we give up the game itself, lisp relegated to being a hackers-only language.  Date: 28 Sep 1981 09:51 PDT From: Deutsch at PARC-MAXC Subject: Suggested new lambda-list syntax To: LISP-FORUM at MIT-AI I would like to cast my vote in favor of the new proposal. Keeping Lisp programs readable by programs is, in my opinion, at least as important as having them be readable by people. Both for this reason, and for my own personal preference, I prefer a syntax where semantic attributes are given a complete space of their own, rather than being carved out of the space of variable names by a lexical convention or table lookup.  
Date: 28 September 1981 09:07-EDT From: Mark L. Miller Subject: Suggested new lambda-list syntax To: RMS at MIT-AI cc: LISP-FORUM at MIT-AI I prefer this proposal to the existing situation. Regards, Mark  Date: 28 September 1981 09:05-EDT From: Mark L. Miller Subject: Re: Suggested new lambda-list syntax To: Scott.Fahlman at CMU-10A cc: RMS at MIT-AI, LISP-FORUM at MIT-AI I would like to cast one small vote in favor of keeping the syntax of LISP minimal. All the arguments about readability of noise words, reserved words, unreadability of parens, etc., sound like they are coming from the PASCAL world. I can't believe I'm reading it. I'd prefer the old days, when LISP code was so straightforward that even a computer program could easily read it. Regards, Mark  RWG@MIT-MC 09/28/81 05:01:03 Re: fast gcd To: lisp-forum at MIT-AI One can combine the single-precision idea in Knuth's (actually Lehmer's) Algorithm L with Silver's (rightshift) binary algorithm. I put this in MACLISP several years ago, and I think it's mentioned in Vol 2, 2nd ed. Also, I think (ALAN@MC) Bawden ucoded it on the Lispm. I suspect that "Lehmerizing" Brent's(?) LEFTshift binary GCD would be even better, but, no matter what binary algorithm is used, it is best to revert to a full multiprecision remainder step whenever the two integers are disparate by several wordlengths, assuming that your REMAINDER function has a faster inner loop than your GCD. The problem with denominator blowup is usually due to using rationals for approximate quantities. You should only do this if there is something wrong with your floats, e.g. too slow, or too imprecise. I think the main application for rationals is for precise quantities, e.g. (loop for i from 1/2 by 1/3 to 13/6 ...) with no worry about missing the endpoint. Or try inverting 1/89 1/144 1/233 1/144 1/233 1/377 1/233 1/377 1/610 . You'll get denominator blowup, all right, but that's exactly what you want. 
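The right-shift binary GCD that RWG attributes to Silver (the same idea is elsewhere credited to Stein) uses only shifts, comparisons, and subtractions, which is why it microcodes so well. A sketch in modern Python, for illustration only (not the MACLISP or Lispm microcode):

```python
def binary_gcd(a, b):
    """Right-shift binary GCD: strip common factors of two, then
    repeatedly shift out twos and subtract, with no division."""
    a, b = abs(a), abs(b)
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:        # factor out common powers of two
        a >>= 1
        b >>= 1
        shift += 1
    while a & 1 == 0:              # a need not stay even
        a >>= 1
    while b:
        while b & 1 == 0:          # gcd(a, b) = gcd(a, b/2) for odd a
            b >>= 1
        if a > b:                  # keep a <= b
            a, b = b, a
        b -= a                     # gcd(a, b) = gcd(a, b - a)
    return a << shift              # restore the common twos
```

The "Lehmerizing" RWG suggests would replace the subtraction loop with single-precision approximate quotient steps once the operands are large, as in Knuth's Algorithm L.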
If you float the 1/89 = .01123..., you have already lost! MACSYMA has an extremely useful variant of gcd (which, by the way, takes any positive number of arguments, as any good gcd (or lcm!) should) which returns as extra values the arguments with the gcd divided out.  Date: 28 September 1981 02:01 edt From: HGBaker.Symbolics at MIT-Multics Subject: rational arithmetic To: bak at MIT-AI, lisp-forum at MIT-AI, deutsch at PARC-Maxc, rz at MIT-MC, jlk at MIT-MC I don't know the details of the current implementations of bignums or rationals on either the Lispm or Smalltalk-80. However, both groups might check Knuth Vol. II regarding rational arithmetic implementations, as it has some good suggestions. 1. GCD of large integers can be "approximated" with single precision calculations (see Alg. L), thus reducing the cost of this operation. Single precision doesn't have to be full word if you don't want. GCD can also be speeded up recursively using this hack, I believe. 2. Producing fractions which are in lowest terms is often easier if you know that all inputs are already in lowest terms (very much like normalization!). You don't necessarily have to do a complete GCD at the end. 3. GCD of smaller numbers is easier than GCD of larger numbers because GCD is O(n^2), for usual implementations. Therefore, several small GCD's can take less time than one large one. 4. GCD is 1 for randomly chosen numbers 61 percent of the time. Therefore, special casing is important. 5. rational operations tend to blow up like crazy, so one's first impression is to throw up one's hands and not do GCD's at all. However, people tend to do rational problems that have simple answers, in which case keeping lowest terms helps a lot. 6. My intuition tells me that you don't want to put off GCD'ing more than once (i.e. do it every other operation), else you waste more time multiplying, CONSing and garbage collecting than you do GCD'ing. 7. 
GCD should be able to be highly optimized in the microcode, at least for the single precision stuff, and so it is probably worth doing the GCD if it keeps the numbers single precision. 8. What is the Macsyma experience with rationals? Or do they treat them as expressions of integers and go through the standard simplifier?  Date: 27 September 1981 21:47-EDT From: Daniel L. Weinreb Subject: implementation of rational numbers To: BAK at MIT-AI, LISP-FORUM at MIT-AI Since your suggestion only affects the internal implementation of rationals and is not visible at the language definition level, it should be easy to play with once rationals are implemented. It should be noted that (presumably) it is defined that the printed representation of a rational always appears as if the rational were in lowest terms; writing the printer in the obvious way (using NUMERATOR and DENOMINATOR) would correctly produce this behavior anyway.  Date: 27 September 1981 21:28-EDT From: Daniel L. Weinreb Subject: About &parsers To: KMP at MIT-MC cc: LISP-FORUM at MIT-MC It might interest you to know that my new Lisp compiler for the L machine has such a function; it is called decode-&keyword-list. I use it in many places in the compiler and it is a great win. Maybe this should either be part of the language or be part of a standard library or something.  Date: 27 Sep 1981 13:15 PDT From: Deutsch at PARC-MAXC Subject: Re: implementation of rational numbers In-reply-to: BAK's message of 27 September 1981 06:39-EDT To: William A. Kornfeld cc: LISP-FORUM at MIT-AI Smalltalk-80 has both bignums and rationals; our implementation of rational numbers converts to lowest terms after every operation. Currently we just do the operation in the straightforward way and apply Silver's gcd algorithm to the result, although of course there are better ways than this to implement the elementary arithmetic operations. We don't have much experience with rationals, so anything you learn would probably interest us. 
It would be very easy for us to switch over to the "lazy reduction" method you suggest, if that turned out to be advantageous.  Date: 27 September 1981 06:39-EDT From: William A. Kornfeld Subject: implementation of rational numbers To: LISP-FORUM at MIT-AI The following ideas have occurred to me concerning the implementation of rational numbers. Since there is at least one (and possibly more) implementations of rational numbers happening, I'd like to have them discussed. 1. Two functions, NUMERATOR and DENOMINATOR be available that return the respective fixnums or bignums as if the rational were in lowest terms. The NUMERATOR function on an integer returns the integer and the DENOMINATOR function called on the integer returns 1. 2. There is a bit associated with each rational number that says whether or not it is known to be in lowest terms. 3. When an arithmetic operation occurs that yields a rational as a result, the resulting rational is NOT put into lowest terms [although see 5 below] 4. The functions NUMERATOR and DENOMINATOR force conversion to lowest terms (if not already there) and set the bit. 5. It is possible, for overall efficiency reasons, that if the result yields a rational where either the numerator or denominator turns out to be a bignum, a conversion to lowest terms should occur before the rational is returned (setting the bit). How these issues are handled can have important efficiency ramifications for some programs. Two other possible implementations are: (a) ALWAYS CONVERT TO LOWEST TERMS AFTER EVERY ARITHMETIC OPERATION. This would increase the time to do arithmetic operations on rationals considerably. (b) NEVER CONVERT TO LOWEST TERMS. This would make frequent calls to NUMERATOR or DENOMINATOR more expensive than necessary and, for certain programs, lead to outrageously big bignums that could have been converted to a more compact form.  Date: 26 September 1981 15:17-EDT From: Kent M. 
Pitman Subject: About &parsers To: ALAN at MIT-MC cc: LISP-FORUM at MIT-MC I agree with your sentiments about lambda lists becoming too complex. I have no objection to (defun f (x y z) (let (a b c) ..)) instead of (defun f (x y z &aux a b c) ...) nor to (defun f (stuff) (dlet ((a b c) stuff) ...)) instead of (defun f ((a b c)) ...) As everyone is probably aware, I think &keywords lose. I think that (defun f (x (optional a b c) (rest d)) ...) and related variants lose on similar grounds in spite of their seeming lispiness. Anything short of (defun f (x (optional a) (optional b) (optional c) (rest d)) ...) will not be a notable improvement for programs that have to do code walking. HOWEVER, I'll not suggest anything along those lines. I know there is a camp of people with a valid gripe that such is overly verbose. I just wanted to make it clear this is not an attempt at taking a radical personal stand -- it is an attempt at compromise. The proposal: I think all systems need to provide at the bare minimum, a parser of the nature ALAN alluded to. Perhaps something like... (PARSE-&KEYWORDED-BVL '(X "E &OPTIONAL (A 3) &EVAL (B) &REST Z)) => ((X (REQUIRED) ) (A (OPTIONAL 3) QUOTE) (B (OPTIONAL) ) (Z (REST) )) or ANY suitably chosen canonical form. This example is only illustrative of an idea, not of an implementation. Such a program would want to be maintained by the system programmers since when the surface syntax changed, it would need to be updated. If a DEF-&KEYWORD were provided, it would have to interface to the parser. Perhaps even a WALK-&KEYWORDED-BVL which walked over useful kinds of nodes (eg, the init specs) of a node, calling a function at each such node with info about what variables were bound so far, etc. Accumulating the return values in interesting ways, and finally returning information about those return values and a simple list of the variables which became bound... 
I can be much more explicit about such a function, but it's not terribly important right here, so I'll assume this gets the general idea across. My point is, though, that the naive user should not have to understand how to write a parser just to be able to write code that manipulates lambdas. I think a system-provided parser and/or code-walker facility of some sort is essential to avoid needless duplication of effort and the possibility of bug introduction due to incomplete understanding of the funny &syntax. Less essential, but a useful notion to consider, would be to retain DEFUN as it is, but to make it macroexpand into something more explicit which the user does not see. This is probably not practical in the current lispm situation since LAMBDA itself can have these keywords, not just DEFUN. Nevertheless, I point it out because traditionally, the nice thing about Lisp was that funny surface forms had elegant and trivially decipherable underlying representations. Eg, 'A looks like something you'd feed to a parser, but underneath the (QUOTE A) is consistent with other stuff. Even `(A ,B ,C), which is icky to many people, has a well-formed underlying representation of (LIST 'A B C). That regularity is what made it so easy to get people to accept the ` syntactic sugar -- because the sugar was invisible to many programs that wanted to be treating a simpler model of the world. If (SI:|`| ...) really resisted macro-expansion attempts and code-walkers had to be constantly on guard for such, I suspect resistance would have been higher. Ditto even for things like LOOP which for all its bad points at least expands into something tractable by pre-existing code-walkers. Hence, the idea of having &keywords exist primarily at the visual level and then macroexpand out to a more robust underlying representation is -- to me -- at least conceptually the right mechanism. Perhaps it is too late to think about that approach tho'... I don't know. 
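KMP's point above — that `(A ,B ,C) is mere sugar over a well-formed (LIST 'A B C) — can be shown with a toy expander. This is an illustrative modern Python sketch, not any historical implementation: nested Python lists stand in for s-expressions, and a ('unquote', x) tuple stands in for ,x.

```python
def expand_backquote(template):
    """Rewrite a backquote template into an explicit (LIST ...) form."""
    if not isinstance(template, list):
        return ['quote', template]          # plain atoms come out quoted
    parts = []
    for item in template:
        if isinstance(item, tuple) and item[0] == 'unquote':
            parts.append(item[1])           # ,x -> x, left to be evaluated
        else:
            parts.append(expand_backquote(item))
    return ['list'] + parts

# `(A ,B ,C)  ==>  (LIST 'A B C)
print(expand_backquote(['A', ('unquote', 'B'), ('unquote', 'C')]))
# -> ['list', ['quote', 'A'], 'B', 'C']
```

Because the output is an ordinary LIST form, any pre-existing code-walker can traverse it without knowing backquote ever existed — exactly the regularity KMP credits for backquote's acceptance.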
Is there any discussion on the idea of formalizing a parser for general use? ... or even several flavors of parsers (pardon my loose use of ice cream terminology) for such lists? What sorts of properties would be desirable, etc? -kmp  Date: 26 September 1981 03:05-EDT From: Alan Bawden Subject: &NO To: LISP-FORUM at MIT-MC This is my vote against the proposed lambda-list syntax. I don't see where it offers a substantial enough improvement in anything to make it worth our while to re-write all the existing &mumble parsers. I don't buy the argument that extending the syntactic space so as to allow for everybody's flavor of destructuring is desirable. This moves me straight into the destructuring-doesn't-belong-in-lambda-lists camp. While before I was willing to select one flavor of destructuring for lambda-lists, overloading lambda-lists with ALL kinds of destructuring seems absurd to me. I suppose the ultimate outcome of this is to allow the user to define his own lambda-list keywords: (def-lambda-list-keyword (one-or-more-of x) ...) would be the way to extend lambda in the direction of LSB...  Date: 26 September 1981 02:12-EDT From: David A. Moon Subject: Proposed new lambda-list syntax To: RMS at MIT-AI cc: LISP-FORUM at MIT-AI Date: 25 September 1981 22:06-EDT From: Richard M. Stallman To: LISP-FORUM at MIT-AI My proposed lambda-list syntax does apply sensibly to optional, key, rest and aux arguments, and grinds much better than the existing syntax does. How it grinds doesn't seem relevant since anything grinds well if the grinder knows how to grind it. I realized, since I made the comment that the above paragraph is in reply to, that it's not so bad for &optional and so forth as I thought, since you allow the syntax (optional a b) rather than (optional a) (optional b), which was the impression I had gotten from your original message. It is not really more complicated either. It replaces one "&" with two parentheses. 
This probably makes it easier to type on a Lisp machine keyboard, since "&" is a shifted character and parentheses are not. This is not really true, since one must say "default" which was not necessary before. Also you haven't explained how one would initialize aux variables. When you do, bear in mind that currently the syntax for this is compatible with LET and PROG. I don't think you ever explained what this new syntax was for, other than saying it is more "lispy", which really doesn't mean anything. Is it intended to allow more syntactic space for adding some new feature?  Date: 25 September 1981 22:06-EDT From: Richard M. Stallman To: LISP-FORUM at MIT-AI My proposed lambda-list syntax does apply sensibly to optional, key, rest and aux arguments, and grinds much better than the existing syntax does. It is not really more complicated either. It replaces one "&" with two parentheses. This probably makes it easier to type on a Lisp machine keyboard, since "&" is a shifted character and parentheses are not. I think that the example in my previous message demonstrates how clean the syntax is when used with optional, rest and key args. Anyone who thinks it is complicated, inapplicable, or unclear, ought to attempt to explain why.  Date: 25 September 1981 20:06-EDT From: Jon L White Subject: Suggested new lambda-list syntax To: rms at MIT-AI cc: MOON at MIT-MC, LISP-FORUM at MIT-MC From the very beginning, there have been several people around here who never really accepted the &-keyword format for lambda lists -- KMP in particular has long been agitating for the list-like format which you have just suggested. I more-or-less agree with MOON's comments reproduced below about the importance of keeping the present &OPTIONAL, &REST, and &AUX; since these are the most common cases, I'd hate to see their usage haired up even one iota. 
And I tend to agree with Fahlman that the proposed *format* is not succinct enough to be a "real winner" (whereas &OPTIONAL, &REST and &AUX are); but at least in the rare cases where such additional generality is desired by a programmer, his requirements should be met in an upwards-compatible way. However, I will say in defense of your proposal that it appears to be fully upwards compatible, so that we needn't flush the existing syntax, at least for the "common" cases, in order to add the more general mechanisms. Date: 23 September 1981 20:28-EDT From: David A. Moon I think &OPTIONAL, &REST, &AUX, and &KEY are quite different from the other lambda-list keywords. Those four keywords delimit boundaries between various sublists, which could have been written as separate lists except that that would be a mess. Your proposal doesn't really apply sensibly to those four keywords. For other ones like "E, &SPECIAL, etc. it would be an improvement. A separate point: I don't think it makes sense to complicate the syntax in order to get rid of the reserved words. Reserved words in this context cause no harmful effects, whereas a syntax with more parentheses and more noise words would decrease readability, and in a very central, heavily-used part of the language too. One other interesting benefit, should your proposal go thru, is that destructuring could be put into the lambda syntax without needing to decide whether one wants the data-directed destructuring or the program-directed one -- there is plenty of room in the keyword space to select either one. Currently, MacLISP and NIL don't really have destructuring for LAMBDAs (only for LET), and the LISPM has no destructuring yet; so there would be no incompatibility problem. The MacLISP/NIL LET could then produce an appropriate flavor of destructuring lambda, and an alternative let, say PLET (for Program-directed-destructuring-LET), would produce the other flavor.  
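JonL's message above distinguishes data-directed destructuring from the program-directed flavor. The data-directed kind — match a nested pattern of names against data of the same shape — can be sketched as follows. This is illustrative modern Python, not any of the implementations under discussion, and the function name is invented.

```python
def destructure(pattern, value, bindings=None):
    """Bind each name in a nested pattern to the matching piece of value."""
    if bindings is None:
        bindings = {}
    if isinstance(pattern, str):
        bindings[pattern] = value          # a bare name binds whatever it faces
    else:
        if len(pattern) != len(value):
            raise ValueError("shape mismatch between pattern and data")
        for p, v in zip(pattern, value):
            destructure(p, v, bindings)    # recurse into matching sublists
    return bindings

# A destructuring (LET (((A (B C)) '(1 (2 3)))) ...) would bind A=1, B=2, C=3:
print(destructure(['A', ['B', 'C']], [1, [2, 3]]))
# -> {'A': 1, 'B': 2, 'C': 3}
```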
Date: 24 September 1981 04:17-EDT From: Richard M. Stallman Subject: Lists for REST, OPTIONAL and AUX look nice To: LISP-FORUM at MIT-AI I agree that &OPTIONAL, &REST, &KEY and &AUX are different from the other lambda-list keywords, but it does not follow that a Lispy syntax for them is ugly. This example (defun foo (a b c d (optional x y z (default xxx t) (default yyy 69)) (rest rest-arg) (key parm1 parm2 (default parm3 t) parm4) (aux tem1 tem2)) ...) seems clear enough.  Date: 23 September 1981 20:28-EDT From: David A. Moon Subject: Suggested new lambda-list syntax To: LISP-FORUM at MIT-MC cc: RMS at MIT-AI I think &OPTIONAL, &REST, &AUX, and &KEY are quite different from the other lambda-list keywords. Those four keywords delimit boundaries between various sublists, which could have been written as separate lists except that that would be a mess. Your proposal doesn't really apply sensibly to those four keywords. For other ones like "E, &SPECIAL, etc. it would be an improvement. A separate point: I don't think it makes sense to complicate the syntax in order to get rid of the reserved words. Reserved words in this context cause no harmful effects, whereas a syntax with more parentheses and more noise words would decrease readability, and in a very central, heavily-used part of the language too. 
The reserved words could be :-prefixed rather than &-prefixed except that :-prefixed keywords can sometimes be EQ to ordinary symbols, which -would- make the reserved words difficult to live with.  Date: 22 September 1981 09:28-EDT From: Bill Long Subject: Lambda List Syntax To: LISP-FORUM at MIT-ML The list-headed-by-keyword syntax is very reasonable and well tested in the LSB community. I like it and haven't heard any complaints from others. If you are going that far, why not adopt the LSB call syntax? The main differences would seem to be whether or not the function name should be the head of the list (call format vs argument list format) and whether aux variables should be in the arg list or separately declared. Even if those differences remained, it would be a nice step toward uniformity (for ease of remembering) if the rest of the syntax were the same. Are there compelling reasons why this can not be so? -Bill Long  Date: 22 September 1981 0002-EDT (Tuesday) From: Scott.Fahlman at CMU-10A To: Richard M. Stallman Subject: Re: Suggested new lambda-list syntax CC: lisp-forum at mit-ai In-Reply-To: Richard M. Stallman's message of 21 Sep 81 17:37-EST Message-Id: <22Sep81 000209 SF50@CMU-10A> I am certainly no fan of &optional and friends, but I like the proposed new syntax even less. My concern is that lambda lists of any complexity will no longer be human-readable without six colored pens and an abacus for counting parens. The only hope is to develop some sort of pretty-printing format so that the scopes of the various options become evident upon casual inspection. I don't think that normal pretty printing will quite get the job done. I may be wrong about this, though, so I would second GJC's motion that you code up some functions in this style and let us see what they look like. -- Scott  Date: 21 September 1981 23:04-EDT From: George J. 
Carrette Subject: Suggested new lambda-list syntax To: RMS at MIT-AI cc: LISP-FORUM at MIT-AI Implement it and show some code written in the style; that's the only way to know for sure.  Date: 21 September 1981 18:37-EDT From: Richard M. Stallman Subject: Suggested new lambda-list syntax To: LISP-FORUM at MIT-AI I'm thinking of implementing the following new syntax for argument lists, which I think is more Lispy than the existing one. Anything in the argument list that is a symbol is the name of an argument. Anything that is a list has a car which is a keyword. Keywords such as OPTIONAL, REST, AUX, KEY, QUOTE are followed by any number of arguments (symbols or lists). A default value is specified by the keyword DEFAULT, as in (DEFAULT varname value specified-flag). These constructs can be nested in any way that makes sense. Arglists such as (X (REST Y Z) A (OPTIONAL A B)), which are nonsensical, would cause error messages. This new format facilitates extensions, such as (DATA-TYPE arg type), (SPECIAL X Y (DEFAULT Z T)), or (LIST A B C) to do destructuring. (LIST A (OPTIONAL B)) is also possible, leading to an interesting idea: lambda-lists and SETF-able expressions can be generalized to be the same kind of thing. It is possible to support both this syntax and the existing one because in the existing one it is illegal to have a list appear unless &OPTIONAL, &KEY or &AUX has appeared first. But ideally I think it would be good to flush the existing one if it is ever practical to do so. Then there would be no reserved words in lambda lists. Any comments? Should the keywords have colons? Be global like the existing keywords?  Date: 19 Sep 1981 (Saturday) 1753-EDT From: PLATTS at WHARTON-10 (Steve Platt) Subject: Lisp for PR1ME needed To: bboards at MIT-AI, lisp-forum at MIT-AI Is there a (decent) LISP available anywhere for a PR1ME 750? Source would be quite desirable, but not absolutely necessary. This is for an educational institution. 
Please reply to PLATTS@WHARTON. Thanks. -Steve Platt  Date: 7 September 1981 03:11-EDT From: Richard M. Stallman Sender: RMS0 at MIT-AI Subject: Lexical variables To: LISP-FORUM at MIT-AI, INFO-LISPM at MIT-AI I've just finished implementing a feature I call LEXICAL-CLOSURE which makes nonspecial variables into downward-only lexical variables: (defun foo (a) (mem (lexical-closure '(lambda (pat elt)...)) ...)) allows the inner lambda to access the variable a even though not special. So far it only works in compiled code, and it isn't installed. I have two questions I want input on: 1) Should it be necessary to write LEXICAL-CLOSURE explicitly to have this feature? Perhaps writing an unquoted lambda-expression should do this? Perhaps (FUNCTION (LAMBDA ...)) should do this? 2) Should I change the interpreter so that variables behave always as they do in the compiler? That is, if not declared special, they would be completely local, unless the lexical-closure mechanism (under whatever user interface is used) makes them visible to an internal lambda. Note that expressions such as ((lambda ...) args) are compiled open by all Lisp compilers I know of, so that explicit lexical-closure is not needed in that case, and won't be in the future either. This makes me feel that it would be inconsistent to require an explicit lexical-closure in any other case just to get the same behavior.  Date: 3 September 1981 13:42-EDT From: David Chapman To: LISP-FORUM at MIT-AI Practically every user utility file has an improved gensym that uses a readable prefix. Trying to read macrocode is rendered painful by G0023's, and the output of once-only is something else. Whatever is done, at least the symbols should be human-readable.  
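The readable-prefix generated names Chapman asks for (compare RWK's GENLOCAL later in this digest, where (GENLOCAL "BODY") => BODY..259) amount to a counter plus a caller-supplied prefix. A minimal sketch, in modern Python for illustration only; strings stand in for uninterned symbols, and the name format is hypothetical.

```python
import itertools

# Shared monotonically-increasing counter for all generated names.
_counter = itertools.count(1)

def gen_named(prefix="G"):
    """Return a fresh generated name carrying a human-readable prefix."""
    return f"{prefix}..{next(_counter)}"

print(gen_named("BODY"))       # e.g. BODY..1
print(gen_named("ONCE-ONLY"))  # e.g. ONCE-ONLY..2
```

Reading macroexpansions full of BODY..7 and ONCE-ONLY..12 is far less painful than a sea of G0023's, which is precisely Chapman's point; note that readability says nothing about interning, which the following messages take up.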
Date: 2 September 1981 2346-EDT (Wednesday) From: David.Dill at CMU-10A (L170DD60) To: lisp-forum at mit-mc Subject: GEN(whatever) Message-Id: <02Sep81 234600 DD60@CMU-10A> If the reason for generating unique pnames is to avoid having the reader intern symbols that were uninterned before the printer printed them, I wonder if a better approach would be to have uninterned symbols marked in some way by the printer, so that read would know not to intern them. This could be an extension to the syntax for package names, with a particular package name meaning that the symbol is not to be interned. Princ could print the symbol without the "don't intern this" marker. -David Dill  Date: 2 September 1981 23:54-EDT From: Alan Bawden Subject: GENSYM/GENTEMP To: KMP at MIT-MC, RWK at MIT-MC cc: LISP-FORUM at MIT-MC Hold on a second. At least one proposed solution to this problem doesn't involve any noticeable changes in GENSYM at all. That is the one where we fix FASDUMP/FASLOAD to respect the non-interned nature of gensyms. Now it may be that this is hard in MacLisp for some reason, but I haven't heard that reason yet. It also might be unacceptable for some reason, but the only objection I have heard so far is that you can fix FASDUMP/FASLOAD but you cannot fix READ/PRINT, which doesn't strike me as a significant argument given the nature of the problem. To carry out this plan involves: 1) Extending FASL file format to cover non-interned symbols. I don't know if FASL file format is "used up" in any way that would prevent this. 2) Providing some easy way to recognize a gensym at FASDUMP time. (On the Lisp Machine this is easy because of the package cell.) Looking in the obarray for each symbol dumped might be acceptable, but it might be incorrect if the user is doing something funny with obarrays. (It also might be slow? Takes no longer than it does to intern... Note that you can compute the sxhash of a gensym reliably quickly!). 
Perhaps there is some clever way to have gensym "brand" each symbol it creates? A magic property is out due to reasons already noted by KMP, but perhaps some other hack? (Any symbol with a one-fixnum pname is a good candidate to be a gensym!) Perhaps whatever gensym does along these lines maknam should do too? Before we go off extending the language some more, I would like to hear some reasons why this cannot be treated as a (fixable) bug in the current language.  Date: 2 September 1981 22:33-EDT From: Robert W. Kerns Subject: GENSYM in MacLisp To: KMP at MIT-MC cc: LISP-FORUM at MIT-MC Well, there's no reason the improved GENSYM couldn't be overloaded if you want it, and be loaded by default in the compiler, but leave the built-in one the same. But I certainly don't mind it being called something else.  Date: 2 September 1981 18:34-EDT From: Kent M. Pitman Subject: GENSYM/GENTEMP To: RWK at MIT-MC cc: LISP-FORUM at MIT-MC Please do not think of changing GENSYM, at least in Maclisp. There are systems (eg, semantic network systems) where people use GENSYM just to make things with property lists that do not print circularly. They never intend these objects to be printed so don't care about the extra hair of GENTEMP which would cost them precious address space to no advantage. I think the concept of a GENSYM that can survive PRINT-READ is a good one to think about, but let's not mix up the two ideas. Even in this day and age, memory conservation makes a difference.  Date: 2 September 1981 16:05-EDT From: Jon L White Subject: Why "uninterned" doesn't mean forever, or GENSYMs away! To: RMS at MIT-MC, MOON at MIT-MC, RWK at MIT-MC cc: LISP-FORUM at MIT-MC As RWK points out, my note entitled GENLOCAL entirely glossed over the problem that symbols may start out life *uninterned* in some lisp environment, but when PRINTed out or compiled/faslapped out to a file, they then lose their identity except for the pname. 
In the two paragraphs below, I indicate why the GENSYM/GENTEMP problem ought to be solved by guaranteeing truly unique pnames (indeed, InterLISP doesn't provide for "uninterned" symbols, so they ought to have something like this), and I mention an interesting solution taken by LISP/370. The GENTEMP problem arises when code-writing programs are compiled, and subsequently loaded back in so that gensym'd variables, indistinguishable except by pname, are interned. The GENTEMP function cooperates with faslap to insure identity of a symbol, without intern'ing it. Unfortunately, this doesn't help code-writers which are merely PRINTed out, and then loaded back in as S-expressions -- under such conditions, all symbols with the same pname will be identified, even though the original environment may have had hundreds of unique-but-same-pname'd symbols. A truly unique gensym name, perhaps one incorporating a location-date-time component of sufficiently fine resolution, would solve this. In fact, the current GENSYM could continue to be used where conflict of identity doesn't matter, and the "new, improved" version could be used when one might anticipate an "identity crisis" such as occurs with program names and program variables. LISP/370 had an interesting solution, which I'd like to relate for the discussion -- it introduced a new type of data, called say the "gensymbol", which could be used for function names or variables, but structurally was an "immediate" type of data (so no plist in the ordinary sense). These objects printed out with a distinct syntax, something like #G000035, so that within a given call to READ, each one which printed alike would be identified, but between calls to READ, they would not be so identified. What's even more interesting, when they are read back in, ** they do not retain the same pname as occurring in the source file **, even though they retain their "identity"! 
Thus, two S-expressions as successive calls to READ, say (A #G00235 B #G00235 #G00236) and (C #G00235 D #G00235 #G00236), would read in like, say, (A #G00001 B #G00001 #G00002) and (C #G00003 D #G00003 #G00004)  Date: 2 September 1981 06:15-EDT From: Robert W. Kerns Subject: My previous note To: LISP-FORUM at MIT-MC cc: JPG at MIT-MC Mentions "autospecialization" and (SYMBOLS T), where it actually meant (SPECIAL T). Sorry for any confusion.  Date: 2 September 1981 03:51-EDT From: David A. Moon Subject: GENSYM/GENTEMP/GENLOCAL To: RWK at MIT-MC cc: LISP-FORUM at MIT-MC I see, the actual bug is in faslap or fasload. Why not fix it there? The Lisp machine's fasdump/fasload currently do not preserve uninternedness of symbols; this could be fixed with a 1-line change and I'm reasonably confident that it would not break anything. It's probably reasonable to have a better version of GENSYM that makes more mnemonic (trackable to their source) names. I would be amenable either to changing the ridiculous thing that GENSYM does with its optional arguments now, or to adding one new function and regarding GENSYM as obsolete. Whatever is done to resolve this problem should be put into Common Lisp so the problem doesn't have to be solved again and again in the future.  Date: 2 September 1981 03:00-EDT From: Robert W. Kerns Subject: GENSYM/GENTEMP/GENLOCAL To: LISP-FORUM at MIT-MC ----- RWK@MIT-MC 09/02/81 02:53:18 Re: GENLOCAL To: MOON at MIT-MC CC: LISP-FORUM at MIT-MC You have inadvertently hit the thumb square on the nail. The purpose of GENLOCAL is primarily to have an uninterned symbol so you can avoid all sorts of problems. Indeed, I can't imagine any reason this functionality couldn't be given to GENSYM. The main improvement of GENLOCAL over GENSYM is the preservation of this important property when the symbol is written into a FASL/VASL/QFASL file and loaded back in again. A normal GENSYM written to a FASL file and read back in will be interned. 
Thus, if you save a code fragment with a gensym (say, describing a structure reference) into a file, and later do the same with another file, there is a certain small but real probability that the two gensyms will have the same pname. If so, they may conflict with each other when you load the two files. I have seen exactly that happen a number of times. As I view it, this is just a better fulfilment of GENSYM's contract. Others might reasonably argue that this is incompatible with GENSYM, I suppose. The bit about generating unique names is primarily an extension of the above argument to include uniqueness across print-into-file/read and other situations where INTERN is the only uniquification being done, unlike FASL files where a symbol is output only once. Of course, uniqueness of ex-gensyms is more important when your compiler has problems with nested declarations, so this problem was discovered in MACLISP first, but it can happen in any system. Another feature of GENLOCAL is that it takes an argument which is taken to provide the initial part of the generated symbol, to help readability of the result. I.e. (GENLOCAL "BODY") ==> BODY..259 . This is rather incompatible with the old definition of GENSYM, and far more useful. I never call GENSYM anymore, and can't see how changing GENSYM to be like GENLOCAL would screw anyone. But with any feature this old you never know what people may depend on. For those confused by prior discussion, the difference between GENTEMP and GENLOCAL is trivial. GENLOCAL in MacLisp would also arrange to inhibit the (DECLARE (SYMBOLS T)) switch in MACLISP, which is defined to auto-specialize all user-variables. Thus GENLOCAL's symbols will not be autospecialized. This is almost certainly irrelevant for other dialects.  Date: 2 September 1981 01:22-EDT From: David A. Moon Subject: GENLOCAL To: JONL at MIT-MC cc: RLB at MIT-MC, LISP-FORUM at MIT-MC What's the point of this? 
I.e., why not simply leave these symbols uninterned and avoid the problem entirely, rather than trying to make unique names?  Date: 1 September 1981 18:06-EDT From: Jon L White Subject: GENLOCAL To: RLB at MIT-MC, LISP-FORUM at MIT-MC The following proposal for a new (macro) function deserves wide circulation since it may need to establish a successor to the time-honored GENSYM function: Date: 31 August 1981 20:23-EDT From: Richard L. Bryan Why not introduce a FUNCTION called GENLOCAL which takes one &optional arg like GENTEMP and also does the putprop that SI:GEN-LOCAL-VAR does? . . . For the benefit of the LISP-FORUM subscribers who haven't yet become familiar with what we've been calling the GENTEMP problem, the following two paragraphs are lifted from some previous mail: Many system facilities automatically generate local variables, sometimes even with numeric declarations. If your code-writer (human possibly, but much more likely some automated system) happens to choose a symbol of the same pname, you will quite likely come into conflict with the scope of the declaration. This applies to any numeric declaration, whether local or global, and to the global (SPECIALS T) declaration. This problem was accurately diagnosed only relatively recently by RLB, RWK and myself [JonL]. The particular tactic taken in GENTEMP is to ensure a variable that isn't on the obarray. Another approach has been considered, that of providing truly unique names, perhaps incorporating some stamp of date-time-location long enough to insure that no two distinct calls to the future GENSYM could ever generate the same pname. Currently, that might require pnames of 12 or more characters, so this could be somewhat of a problem. Any possibility that LISPM, NIL, SPICE, FRANZ, etc. can address this problem too? MacLISP questions: GENLOCAL sounds as good a name as any -- COMPLR has had its own version of this for some time (called LOCAL-VAR on the SOBARRAY). 
As I mentioned some time before, SI:GEN-LOCAL-VAR with () as first arg behaves just as you describe GENLOCAL; SI:GEN-LOCAL-VAR has been on the SI package until now because of its admittedly experimental and specialized nature. Since none of these GENTEMP things have been yet advertised, the question arises of where to put GENLOCAL -- how about UMLMAC or MLMAC? Another problem is arising now too, namely that there isn't enough space in the initial LISP to have any more initial autoloadable facilities. Possibly some of the lesser-used symbols could be flushed in favor of the newer -- such as DEFSTRUCT (and its two or three auxiliaries), GENTEMP, GENLOCAL, etc.  Date: 10 July 1981 17:51-EDT From: Jon L White Subject: MACROEXPAND and related functions To: LISP-FORUM at MIT-MC GJC and I have wondered how various facilities could convey to a macro function the information that the result was only going to be used "for effects"; one reasonable way is to let MACROEXPAND and any such related functions take an optional second argument which signals this condition. Any comments?  Date: 26 June 1981 1035-EDT (Friday) From: Guy.Steele at CMU-10A To: lisp-forum at MIT-AI Subject: Re: NCONC and LAST of locatives CC: gosper at PARC-MAXC In-Reply-To: Daniel L. Weinreb's message of 25 Jun 81 20:08-EST Message-Id: <26Jun81 103544 GS70@CMU-10A> Locatives are another example of a feature which seems simple because its implementation is "obvious", but which has unexpected repercussions in the language. Maybe it's a lot of work to figure out a consistent theory of locatives, but unless that is done it will continue to be just another of many ad-hoc kludges rather than part of a coherent language. --Guy  Date: 25 June 1981 21:08-EDT From: Daniel L. 
Weinreb Subject: NCONC and LAST of locatives To: LISP-FORUM at MIT-AI, GOSPER at PARC-MAXC While you may be right that the semantics of NCONC on locatives is not currently the most consistent thing, I think it might take a long time to work out a philosophy of what all the various list and tree hacking functions should do when given locatives. It seems like a lot of work, and not very important...  Date: 25 Jun 1981 (Thursday) 1930-EDT From: SHRAGE at WHARTON-10 (Jeffrey Shrager) Subject: Oops -- that's LET (I've been told) -- "never mind" To: lisp-forum at MIT-AI  Date: 25 Jun 1981 (Thursday) 1736-EDT From: SHRAGE at WHARTON-10 (Jeffrey Shrager) Subject: Avoiding recomputation or recapitulation: (g x x) To: lisp-forum at MIT-AI How about a macro, WHERE, that had a form something like: (where ((x e1) (y e2) ...) expr) ==> ((lambda (x y ...) expr) e1 e2 ...) Then one could say: (where ((x 5) (y 6)) (+ x x (/ y x))) [or something like that]. No need for locals beyond the scope of the expression, no need for fifo register description, self-cleaning (via lambda), etc. You still have to give the args names but they are cleanly localized by lambda.  Date: 25 June 1981 13:08-EDT From: David Chapman Subject: more about SEQ To: DLW at MIT-AI, Guy.Steele at CMU-10A cc: BAK at MIT-AI, LISP-FORUM at MIT-AI, gosper at PARC-MAXC Actually, SEQ does not avoid introducing temporary names at all; it just defines a default temporary name * (an accumulator) and saves you having to think of a gensym. An alternative I tried as part of a since-flushed feature of the Programmer's Apprentice was to edit data-flow diagrams directly; (G x x) is just

     ---
    | x |
     -o-
     / \
  --o-----o--
  |    G    |
  -----------

Some other Programmer's Apprentice software would convert between this representation and lisp. Of course big box/arrow diagrams got unusable pretty fast.  Date: 25 June 1981 1144-EDT (Thursday) From: Guy.Steele at CMU-10A To: Daniel L. 
Weinreb Subject: LOGLAN CC: bak at MIT-AI, gosper at PARC-MAXC, lisp-forum at MIT-AI In-Reply-To: Daniel L. Weinreb's message of 25 Jun 81 01:52-EST Message-Id: <25Jun81 114413 GS70@CMU-10A> Actually, the pronouns in questions are DA, DE, DI, DO, DU; these are the "definite pronouns". There is a second series (BA, BE, etc.) which are indefinite (but existentially quantified, I think). If five aren't enough, I believe one can use subscripts: DA1, DA2, etc. (which are pronounced DACINE, DACITO, DACITE, ..., the numbers being NE, TO, TE, ... and CI being a subscript marker, pronounced SHI (Loglan C = English SH). The trouble is that these actually carry over from sentence to sentence, and there seems to be some difficulty with keeping track of the last five things, let alone the last two, as they slide by. There is also the question: if one uses a pronoun, does what it refers to get yanked back to the front of the cache? Even if not, if one uses DA twice in one sentence one would like it to keep meaning the same thing. This debate has been raging in LOGLAN circles for a while. Most languages make use of typed pronouns to help in this: for example, HE, SHE, and IT all mean roughly the same thing except for their data type; WE and THEY are similar in function but remind one of who is included (you and I being especially important compared to them!). Compare: "He gave it to she and she thanked he for it." "DA gave DI to DE and DE thanked DA for DI." (I left out case markers in the English for purposes of comparison.) The LOGLAN version might have been "DI gave DA to DE and DE thanked DI for DA." or any other permutation, depending on the order in which the three participants were recently mentioned. English lets one do semantic, not syntactic, matching of pronouns to nouns. Other things one does in English are abbreviated reference: "George borrowed his dad's newer yellow car [his dad owns five cars, and two are yellow]. 
When he returned the car, he said a girl in another car had waved to him. He had followed her, whereupon a man in the car [which car, now? you know] got out and sprayed whipped cream on his windshield [of which car???]..." and invented names: "George's cousin's dentist knows a man whose dog -- I don't know its name; let's call it Fred -- bit the milkman. He howled and kicked Fred so hard..." (This is a LET.) Scott Layson knows much more about LOGLAN than I do, if you want gory details. I believe he can speak it. If you really want to avoid LET, I suggest a combinator-type language. That has problems of its own, but no intermediate names. --Guy  Date: 25 June 1981 10:10-EDT From: David Chapman To: DLW at MIT-AI, BAK at MIT-AI cc: LISP-FORUM at MIT-AI, gosper at PARC-MAXC Interestingly, the da, de, di, do, du construction doesn't work very well in Loglan either. First, you only get to refer to five back. The big problem, though, is figuring out where that is in a complex sentence. People are real bad at keeping track of fifo lists in real time. To make matters worse, it isn't always clear how it ought to work in cases involving hierarchical phrases. "Then Jack said `You need oregano, parsley, chili, and chives in tomato slop,' and put the spoon in." Is two back (de) Jack or the tomato slop (or the whole quoted string)? Either way you lose access to the other. (Since Loglan has a hack to allow quotations from any random language, which may not follow queuing conventions, I think they ended up making the convention that quotations are opaque in getting the YACC parser to work.) The lispish equivalent of this is (G (F x) x) -- ie, you want to be able to refer to previous subexpressions' arguments as well as previous arguments of your own.  Date: 25 June 1981 02:52-EDT From: Daniel L. 
Weinreb To: BAK at MIT-AI cc: LISP-FORUM at MIT-AI, gosper at PARC-MAXC I don't think your macro deals with the problem at all, actually; the problem was what if you want to do (G x x), and you don't want to write and compute "x" twice. In your macro it looks like you just get to say "*" once, and other "*"s refer to later quantities rather than letting you get at the first one again. It is pretty easy in Lisp to just say (let ((quan
)) (g quan quan)) If you wanted something more concise, you would need some way of getting quantities named without specifically putting in a form that names them. LOGLAN (an artificial natural language, so to speak) has this hack where there are five "variables" that automatically get assigned, in order, to the first five things mentioned in a sentence; you can then refer to these by using pronouns (I believe the words used to reference them are LA, LE, LI, LO, and LU or something). This works because people break things up into sentences and they don't individually get too complicated. (Does this remind you of registers A, B, C, D, and E?) I don't see any good way to put this into a programming language, but maybe someone else does...  Date: 24 June 1981 16:51-EDT From: David A. Moon Subject: NCONC and LAST of locatives To: GOSPER at PARC-MAXC cc: LISP-FORUM at MIT-MC The only recommended procedure for using locatives to build up a list is the one in the example on page 157 of the red manual, i.e. rplacd the previous tail and then setq the tail variable to the next tail. Actually what you should do is use LOOP and let it write the code, which will be as efficient as anything you could write yourself and has the bonus of working in all implementations (it writes different code for list collection for Lisp machine, pdp10 Maclisp, Multics Maclisp, and VAX NIL).  Date: 24 June 1981 1441-EDT (Wednesday) From: Guy.Steele at CMU-10A To: Gosper at PARC-MAXC Subject: Antecedents CC: lisp-forum at MIT-AI Message-Id: <24Jun81 144152 GS70@CMU-10A> Bill, Granted that it's a pain to have to name trivial quantities you only want to use twice, nevertheless I dispute the claim that "LISP is guiltier than machine language" for the example described: Suppose (F a) is any function, (G a b) is a recursive function, and x stands for a large (or costly) expression. Now suppose you want (G x (F x)) (or just (G x x)).
In machine language, (and presumably in the compiler's model), the second x is readily available in a register or arg vector or stack as part of the setup to call G, but in LISP there seems to be no way to exploit this, necessitating the invention of a temporary. ... Reservation: a numeric designator ("N args ago") is unflavorful on grounds of readability and editability, hatching gross bugs on failure to increment or decrement the Ns upon insertion or deletion of intervening args. As a rule, in machine language you must give *everything* a name (for the most part, what is a register but a pronoun, or at least a short name?); thus you don't notice it especially when a trivial intermediate quantity gets a name also. For PDP-10 MacLISP: HRRZ A,@.SPECIAL FOO ;(BAR (CADR FOO) 'QUUX (CADR FOO)) HLRZ A,(A) MOVEI B,.QUOTE QUUX MOVEI C,(A) ;Copy first arg to third PUSHJ P,BAR Here we easily grabbed the first argument to copy into the third, but that's because it had the name A! (We gave the new one the name C -- or rather, the calling conventions *require* that we name the third argument C.) A partial exception is stack quantities, which don't have names, but those are then generally accessed by numeric designator. PUSH P,[RETADR] HRRZ A,@.SPECIAL FOO ;Same example, but BAR is LSUBR HLRZ A,(A) PUSH P,A PUSH P,[.QUOTE QUUX ] PUSH P,-1(P) ;Copy first arg to third MOVNI T,3 JRST BAR RETADR: Here we had to use a numeric offset to refer to the first argument. In summary, I suggest that LISP probably lets you elide more names than typical machine code, and lets you choose your own names for the rest, rather than requiring a fixed small set of register names. --Guy  Date: 24 June 1981 09:22-EDT From: George J. Carrette Subject: NCONC and LAST of locatives To: GOSPER at PARC-MAXC cc: lisp-forum at MIT-AI Many people never use RPLAC* or NCONC in macros. Much of the time macros can be written as a destructuring against a pattern, and then the building up of something using BACKQUOTE.
That is a syntactic transformation, and so it is a bit silly to use RPLAC* operations, especially if just for efficiency, since on the Lispm the macro-consing will be in a temporary area anyway, and in Maclisp garbage collection is so darn fast. I'll admit your example was a bit confusing, so I may have misread what you were trying to do.  Date: 24 Jun 1981 0415-EDT From: JoSH Subject: Re: Antecedent, what's thy name? To: GOSPER at PARC-MAXC, lisp-forum at MIT-AI In-Reply-To: Your message of 24-Jun-81 0338-EDT Two possibilities come to mind: marking the antecedent (G (this x) that) and matching for it (G (foo (bar ...)) (previous foo)) or possibly a combination (G (this (foo (bar ...))) (that foo)) ...but this hardly seems any better than (G (setq x (foo (bar ...))) x). What about a pronoun-word that refers to the position within the form containing the pronoun: (G x it-cadr) or better (G x g-cadr) ? A decent structure editor should be able to keep your references straight for you. -------  Date: 24 June 1981 04:03-EDT From: William A. Kornfeld To: gosper at PARC-MAXC cc: LISP-FORUM at MIT-AI I have a macro I use occasionally called SEQ (previously reported to this column) that addresses your problem. It was invented mostly because it is sometimes easier to read function nesting "backwards". For example: (SEQ (H x) (G * y) (F *)) expands into (F (G (H x) y)) If more than one `*' appears, it's smart about inserting a gensym'd symbol so the thing doesn't get evaluated more than once, so (SEQ (G x) (F * y *)) expands into: (LET ((G0001 (G x))) (F G0001 y G0001)) I have found this inversion of functional notation to make things much more readable in many situations and solves at least some cases of your problem. [I actually use the "circle-plus" character rather than * to avoid clashes.]  Date: 24 JUN 1981 0038-PDT From: GOSPER at PARC-MAXC Subject: Antecedent, what's thy name?
To: lisp-forum at AI One of the characteristic drawbacks of "low level languages" is the incessant need to invent names or places to temporarily put quantities that you had no intention of naming. Here's a case where LISP is guiltier than machine language. Suppose (F a) is any function, (G a b) is a recursive function, and x stands for a large (or costly) expression. Now suppose you want (G x (F x)) (or just (G x x)). In machine language, (and presumably in the compiler's model), the second x is readily available in a register or arg vector or stack as part of the setup to call G, but in LISP there seems to be no way to exploit this, necessitating the invention of a temporary. A clean pronoun mechanism permitting later arguments to reference their predecessors on the left would enhance clarity and efficiency, and resolve many parallel-serial assignment dilemmas in favor of parallel, by providing an alternate serial mechanism. Reservation: a numeric designator ("N args ago") is unflavorful on grounds of readability and editability, hatching gross bugs on failure to increment or decrement the Ns upon insertion or deletion of intervening args. Any suggestions? -------  Date: 23 JUN 1981 2351-PDT From: GOSPER at PARC-MAXC Subject: NCONC and LAST of locatives To: lisp-forum at AI Suppose a macro initializes the variable LET to '(LET () ...) preparing to NCONC zero or more lists of variable-names onto the list which is (CADR LET). One might try initializing POINTER to (LOCF (CADR LET)), and then for each list of new-names (NCONC POINTER new-names), on the theory that NCONC cdrs until the cdr is () and then RPLACDs. This currently fails on the LispM because NCONCing the locative is apparently a no-op. Should it be? Recalling that RPLACD affects locatives, I tried (RPLACD (LAST ...) ...) in place of NCONC. This does the right thing the first time (thereby differing from NCONC!), because (LAST POINTER) was POINTER.
Unfortunately, it remains thus for non-null (CDR POINTER). Admittedly, this is curable by the (probably more efficient anyway) stratagem of (RPLACD (LAST POINTER) (SETQ POINTER new-names)), but should this be necessary? -------  Date: 19 June 1981 00:44-EDT From: Richard M. Stallman Subject: nested function definitions interfering with extensibility. To: LISP-FORUM at MIT-AI Date: 18 June 1981 10:37-EDT From: George J. Carrette This is certainly true in the case of program editing in a non-structured text-oriented way, but in a system with more cooperation between program editor and compiler this need not be the case. I'm all for editing programs as text, but I don't like representing them, storing them, as a sequence of characters, i.e. in a purely syntactic manner. Why? Because it severely limits program management to purely syntactical levels. The problem is not in editing at all. It is easy in any sort of editor to edit the internal function definition and make no change to the external one, and vice versa. The problem comes when you try to store a file which, when loaded, changes one but not the other. No editing is involved, just loading of source or compiled files. IRT Moon's suggestion: Function specs like (:INTERNAL FOO BAR) for BAR within FOO make it possible to redefine the internal function without changing the external one. This is a good solution for that half of the problem. But how do you redefine the external one without changing the internal one?  Date: 18 June 1981 20:47-EDT From: David A. Moon Subject: LABELS To: RMS at MIT-AI cc: LISP-FORUM at MIT-MC Date: 18 June 1981 05:44-EDT From: Richard M. Stallman Subject: Binding functions vs packages. To: LISP-FORUM at MIT-AI The use of syntactically nested functions is very bad for extensibility. If I have FOO which calls BAR, and BAR is defined separately, I can redefine either one separately.
If FOO contains a LABELS which defines BAR, then I cannot redefine just one of them without redefining the other (trivially). I would assume that the name of the function defined in the labels for purposes of the outside world would be (:INTERNAL FOO BAR) rather than BAR, and redefining that name would work.  Date: 18 June 1981 1452-EDT (Thursday) From: Guy.Steele at CMU-10A To: Richard M. Stallman Subject: Re: Variable binding discipline CC: lisp-forum at MIT-MC In-Reply-To: Richard M. Stallman's message of 18 Jun 81 04:48-EST Message-Id: <18Jun81 145244 GS70@CMU-10A> (1) Sorry -- in my description of the proposed scheme I omitted the description of LOCAL-DECLARE, either out of brevity or laziness. Indeed, one could use LOCAL-DECLARE to override the default interpretation of an ordinary variable. (2) While LABELS *could* be bad for extensibility, it needn't necessarily be bad (let alone "very bad"). I agree that LABELS ought to be used sparingly. One of the reasons for putting it into a real language is to find out the extent of its usefulness (that it has *some* usefulness is, I think, already established). Often redefinability of functions is what you want. On the other hand, when I write REVERSE using REVERSE1, I don't particularly want REVERSE1 to be redefined. (Tracing is another matter.) --Guy  Date: 18 June 1981 10:37-EDT From: George J. Carrette Subject: Binding functions vs packages. To: RMS at MIT-AI cc: LISP-FORUM at MIT-AI From: Richard M. Stallman I hear that GJS and HAL have a plan for manipulating "lexical" environments in a fashion which is not really syntactically nested, and this may solve the problem. But it is not simply ordinary lexical binding of function names. Anything which is really syntactically local does not do the job. This is certainly true in the case of program editing in a non-structured text-oriented way, but in a system with more cooperation between program editor and compiler this need not be the case.
I'm all for editing programs as text, but I don't like representing them, storing them, as a sequence of characters, i.e. in a purely syntactic manner. Why? Because it severely limits program management to purely syntactical levels. -gjc  Date: 18 June 1981 05:48-EDT From: Richard M. Stallman Subject: Variable binding discipline To: LISP-FORUM at MIT-AI I don't think that whether a variable is special ought to be determined only by the innermost lexically visible binding. A LOCAL-DECLARE inside the innermost binding ought to override it.  Date: 18 June 1981 05:44-EDT From: Richard M. Stallman Subject: Binding functions vs packages. To: LISP-FORUM at MIT-AI The use of syntactically nested functions is very bad for extensibility. If I have FOO which calls BAR, and BAR is defined separately, I can redefine either one separately. If FOO contains a LABELS which defines BAR, then I cannot redefine just one of them without redefining the other (trivially). I hear that GJS and HAL have a plan for manipulating "lexical" environments in a fashion which is not really syntactically nested, and this may solve the problem. But it is not simply ordinary lexical binding of function names. Anything which is really syntactically local does not do the job.  RMS@MIT-AI 06/18/81 03:05:48 To: lisp-forum at MIT-MC Correction, SETF destructuring can avoid the screwy interaction with DEFSTRUCT default arguments, if the structure constructor has its own SETF property. If it were trying to work by letting itself be macroexpanded and letting the expansion be SETF'd, it would still have the problem.  RMS@MIT-AI 06/18/81 03:03:05 To: lisp-forum at MIT-MC I think that SETF destructuring is immune to the screwy interaction with DEFSTRUCT default arguments.
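[A minimal sketch of the LABELS extensibility issue discussed above, using GLS's REVERSE/REVERSE1 example. The names REVERSE* and the exact (:INTERNAL ...) behavior are illustrative assumptions in Zetalisp style, not code from any of the messages.]

```lisp
;; Sketch only: REVERSE* is a hypothetical name, used so as not to
;; clobber the system's REVERSE.
(defun reverse* (l)
  (labels ((reverse1 (l acc)               ; helper visible only inside REVERSE*
             (if (null l)
                 acc
                 (reverse1 (cdr l) (cons (car l) acc)))))
    (reverse1 l nil)))

;; A later top-level (DEFUN REVERSE1 ...) has no effect on REVERSE*,
;; which is exactly what GLS wants and what RMS objects to.  Per Moon,
;; the inner function could be redefined under the function spec
;; (:INTERNAL REVERSE* REVERSE1); RMS's remaining question is how to
;; redefine the outer function alone when a loaded file redefines both.
```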
Date: 17 June 1981 13:15-EDT From: Jon L White To: KRONJ at MIT-MC cc: BUG-MACLISP at MIT-MC, GJC at MIT-MC, LISP-FORUM at MIT-MC Date: 16 June 1981 23:39-EDT From: David Eppstein Is there any way to get DEFVST-structures to print in a machine-readable format? Or does there exist somewhere a way to make the #-readmacro recognize #{STRUCTNAME FOO ...} type constructions? If you have the DEFVST file loaded into a lisp which has some structures created in it (even those that may have been loaded from FASL files before the DEFVST/EXTEND files were loaded), then the structures will print out nicely -- in fact, one could define a meaning for "#{", using DEFSHARP, to cause the printed form to read back in as an EQUAL structure. But there has not been complete agreement on this feature, nor really strong interest, so no one has taken the liberty of adding it yet.  Date: 10 June 1981 22:39-EDT From: George J. Carrette Subject: Reply to DESTRUCTURING considered harmful. To: DLW at MIT-AI cc: DILL at MIT-MC, LISP-FORUM at MIT-MC, Guy.Steele at CMU-10A I have a model that deals with the defaulting problem of (DSETQ #{FOO A: X B: Y} Z) that GLS mentioned. It's called "QUOTE considered harmful," but is too long for even me to dare sending to lisp forum. -gjc  Date: 10 June 1981 22:24-EDT From: George J. Carrette Subject: Variable binding To: Guy.Steele at CMU-10A cc: LISP-FORUM at MIT-MC I think the message you sent just previously, about the "package system," is related to the issue of having procedure variables be lexically scoped. As you said, the way we use procedures now is as lexically scoped variables. I even enforce this view in the Macsyma->Lisp translator, e.g. in "F(G):=H(1)+G(1)", "H(1)" has "H" lexical-global, while "G(1)" turns into "(FUNCALL G 1)." The Macsyma-User never knows the difference, in fact he doesn't really seem to think about it much.
(Obviously this breaks down entirely in the cases where the Macsyma-System-Programmer has written an FSUBR that depends heavily on VARIABLES being SYMBOLS, etc.) Anyway, the claim is that if you have lexically scoped procedure variables you don't need a package-system. What you do need is separate compilation of the elements of a LABELS statement, e.g. (DEFENV BAZ () ; define an environment (LABELS ((FOO "foo") ; the LAMBDA is in the file "foo" (BAR "bar")) ; the LAMBDA is in the file "bar" )) If you want to call the function FOO from outside of the environment BAZ then maybe you say ((BAZ FOO) 1 2 3). Define ":" as an infix operator and you can say (BAZ:FOO 1 2 3). I don't know, maybe this sounds too much like PL/1 on Multics. -gjc  Date: 10 June 1981 17:59-EDT From: Daniel L. Weinreb Subject: Reply to DESTRUCTURING considered harmful. To: GJC at MIT-MC cc: DILL at MIT-MC, LISP-FORUM at MIT-MC Yeah, it's possible to get the abstract destructuring, but only if the printed representation is what you are talking about, which it isn't, for good reasons.  Date: 10 June 1981 1753-EDT (Wednesday) From: Guy.Steele at CMU-10A To: lisp-forum at MIT-MC Subject: Reply to reply to DESTRUCTURING C.H. In-Reply-To: George J. Carrette's message of 10 Jun 81 15:54-EST Message-Id: <10Jun81 175306 GS70@CMU-10A> Unfortunately there is a bad interaction between destructuring of DEFSTRUCT structures and defaulting of components. Consider a slightly modified version of GJC's example: (DEFSTRUCT FOO A B (C 'TANG)) ;FOO is an astronaut (see Chineual) Now when one writes #{FOO A: 1 B: 2} I assume that means the same as #,(MAKE-FOO A '1 B '2), on the Aunt Agatha principle: I can't think of what else it might usefully mean.
Assuming this to be true, then we find that writing (DSETQ #{FOO A: X B: Y} Z) is identical to writing (DSETQ #,(MAKE-FOO A 'X B 'Y) Z) which, because of the defaulting mechanism of DEFSTRUCT, is the same as (DSETQ #,(MAKE-FOO A 'X B 'Y C 'TANG) Z) which is the same as (DSETQ #{FOO A: X B: Y C: TANG} Z) because of the defaulting of the C slot, which happens at READ time when the structure is constructed. Therefore, like it or not, the variable TANG gets setq'd as well as X and Y! I don't think you can get rid of this effect without either making DSETQ not really data-directed, or throwing away the component-defaulting mechanism of DEFSTRUCT (which would be unacceptable, I think). --Guy  Date: 10 June 1981 16:54-EDT From: George J. Carrette Subject: Reply to DESTRUCTURING considered harmful. To: DILL at MIT-MC cc: LISP-FORUM at MIT-MC The only place I have seen destructuring used is in the processing of S-EXPRESSIONs which have been produced by a READER or PARSER. The three most common applications are in lisp macros, various lisp-style assemblers, and the alpha-conversion (first-pass) of lisp compilers. For these purely syntactical uses the SYNTACTIC destructuring of DEFMACRO and [maclisp] LET is great, made-to-order one might say. The other kind of destructuring is SEMANTIC, e.g. in "(SETF <access-form> X)" one must look at "<access-form>" as a lisp program, and know about order-of-evaluation and other considerations. [Although the LispM design decision was to treat SETF purely syntactically for simplicity.] A problem with the semantic destructuring is that one can express arbitrary pattern-matching in it, and that is known to lose. * Let us look at the syntactic destructuring when one has a read syntax for structures. * (defstruct foo a b c) (make-foo a 1 b 2 c 3) => #{FOO A: 1 B: 2 C: 3} ; print SYNTAX Therefore it is reasonable to assume that (DSETQ #{FOO A: X B: Y} Z) ; syntactic destructuring SETQ will expand into (SETQ X (FOO-A Z) Y (FOO-B Z)).
So you see that it is possible to get the destructured abstraction you spoke of within the purely syntactic realm of maclisp DSETQ and LET. Therefore fear not. -gjc  Date: 10 Jun 1981 1606-EDT From: DILL at CMU-20C Subject: Destructuring considered harmful To: lisp-forum at MIT-MC I think the use of destructuring as it currently exists represents a step backwards in making Lisp a serious language for programming non-toy systems, because it is actively hostile to abstracting away from representational decisions. In 90% of all cases, complicated list structures are REALLY being used to implement some sort of abstract type, or just to package up a bunch of related information that you want to pass around. The use of LET-style destructuring emphasizes the position of the data in particular structures, and de-emphasizes the meaning of the data. This is precisely the opposite of what really should happen. It binds programmers to using a particular representation ("I wish I could change the order of these two things, or put them in a vector, but I can't because every routine in the system KNOWS exactly what the guts of the structure look like"), and it makes it difficult to understand what is happening in terms of the abstraction being implemented (e.g., sub-trees of a binary tree are extracted by saying ((a . b) random-tree) instead of having the extractors "left-son" and "right-son" right there). Furthermore, because of its brevity and "convenience", this language feature will actively encourage programmers to do it the wrong way. I think that RMS-style destructuring will at least allow structures to be shredded according to their abstract structure. -David Dill -------  Date: 9 June 1981 11:22-EDT From: Lars S.
Hornfeldt Sender: LSH0 at MIT-MC Subject: Intra-dialect Lisp translation To: LISP-FORUM at MIT-MC, finin at WHARTON-10 AVAILABLE: LISP 1.6 ==> MACLISP translator, in the form of an ITS-TECO MACRO by KMP and me, which is very thoroughly tested by repeatedly translating new versions of a 50K-package perfectly, and without need for the user to interact. WANTED: Translator MACLISP ==> Utah STANDARD LISP (possibly <== ) Preferably keeping comments in the output file (as the above MACRO does). The teco-macro contains a public, generally useful part, but also a private part for individual needs, which always pop up and which you'd like to add to. As FININ says, compatibility packages are also recommended. Also translates keyword-introduced comments to code and vice versa, since one always needs some code to be individual for each dialect. My experience of all this is quite good. Used it not for a one-time translation, but even for CONTINUOUS translation, for parallel versions, allowing development to be done in one dialect (1.6) and the updated 1.6-code to be immediately translated to Maclisp (the debugging was actually done in the latter dialect). -lsh  Date: 2 June 1981 14:01 edt From: Benson I. Margulies at MIT-Multics Subject: Found in the Boston Globe Sender: Margulies.Multics at MIT-Multics Reply-To: Margulies at MIT-Multics To: sipb at MIT-MC, multics-lisp-people at MIT-MC, lisp-forum at MIT-MC (defun ad (want-job challenging boston-area) (cond ((not (equal want-job 'yes)) nil) ((not (equal boston-area 'yes)) nil) ((lessp challenging 7) nil) (t (append ((defun nf (a c) (cond ((null c) nil) ((atom (car c)) (append (list (eval (list 'getchar (list (car c) 'a) (cadr c)))) (nf a (cddr c)))) (t (append (list(implode (nf a (car c)))) (nf a (cdr c)))))) (get 'ad 'expr) '((caaddr 1 caadr 2 car 1 car 1) (car 5 cadadr 9 cadadr 8 cadadr 9 caadr 4 car 2 car 1) (car 2 caadr 4))) (list '851-5071x2661)) )))  Date: 29 May 1981 23:03-EDT From: Kent M.
Pitman Subject: DICK@ML's #, problems To: LISP-FORUM at MIT-MC These were Maclisp-specific problems. I have replied to him and BUG-COMPLR and suggested that Maclisp-Forum be used for any further discussion.  Date: 29 May 1981 19:04-EDT From: Richard C. Waters To: BUG-COMPLR at MIT-ML, LISP-FORUM at MIT-ML further further investigation shows that #,form does not seem to uniquize when interpreted. rather the complr does something extra to cause uniquization to happen. I think this argues further that the complr shouldn't do this. Dick Waters  Date: 29 May 1981 18:51-EDT From: Richard C. Waters To: BUG-COMPLR at MIT-ML, LISP-FORUM at MIT-ML further investigation shows that #,form does uniquizing of the list returned by form. This just isn't going to work if form is circular unless it really does it right. I suggest that if nothing else it just stop doing uniquizing unless it is going to do it right! #. can continue to do uniquizing, so you can get the space savings if you want it. Dick Waters  Date: 22 May 1981 00:44-EDT From: Daniel L. Weinreb Subject: OPENF To: LISP-FORUM at MIT-AI, BUG-LISPM at MIT-AI The final decision on the proposed OPEN/OPENF changes is to leave things as they are. The reason is that even if we were to change OPEN to take &rest keywords, it still wouldn't be "standard" because it would not take alternating keywords and values. For OPEN to take alternating keywords and values is not really desirable anyway as it increases verbosity with no increase in clarity. While in the future we will attempt to maintain the standard form of keywords for new functions, OPEN will continue to work the way it currently does. (There were other issues too but I'm not in flame mode...)  Date: 21 May 1981 1744-MDT From: STOUTEMYER at UTAH-20 Subject: MAILING LIST To: LISP-FORUM at MIT-MC COULD YOU PLEASE PUT ME ON YOUR MAILING LIST?
-------  Date: 19 May 1981 (Tuesday) 1909-EDT From: PLATTS at WHARTON-10 (Steve Platt) Subject: interlisp-like primitives in lisp To: FININ at WHARTON-10 cc: lisp-forum at MIT-AI ...some are available in the LISP F3 package, which implements a (rather minimal) lisp in fortran, and builds INTERLISP-like structures on top of that. The higher-level fns are written in basic LISP, and might be translatable to FRANZ LISP with minimal problems. I forget the extent of the subset implemented, but I'll later look it up if you send your needs to me. -Steve Platt  Date: 19 May 1981 17:14 cdt From: VaughanW at HI-Multics (Bill Vaughan) Subject: franz lisp & NIL Sender: VaughanW.REFLECS at HI-Multics To: lisp-forum at MIT-MC cc: VaughanW at HI-Multics I am unfamiliar with either Franz LISP or NIL and would appreciate pointers to any descriptive literature, implementations etc. (Currently building a LISP interpreter for my TRS80 and looking for good idea sources; memory a major limitation.)  Date: 11 May 1981 (Monday) 1339-EDT From: FININ at WHARTON-10 (Tim Finin) Subject: intra-dialect Lisp translation To: lisp-forum at MIT-AI I am involved in a project to translate a large InterLisp system to run in Franz Lisp. We envision this process as having two components: (1) a system which translates InterLisp source code to Franz Lisp source code, and (2) a set of packages which create a run-time environment in Franz Lisp which is more similar to InterLisp's. One of our goals is to produce a translation system which will allow us to take a new release of the InterLisp version and produce a Franz version with less than one man-week's worth of effort. I am very interested in hearing about: o existing translators which go from one dialect of Lisp to another (especially from InterLisp to FranzLisp or MacLisp) o People's experience with automatic translators - how good can they be? o existing packages which implement standard InterLisp utilities (e.g. 
Record package, Decl package) - Tim Finin -  Date: 9 May 1981 18:29-EDT From: Kent M. Pitman Subject: Mailing-list additions To: LISP-FORUM at MIT-MC I have added VaughanW.REFLECS@HI-Multics to the list. I will create a LISP-FORUM-REQUEST list which people should be encouraged to send to in the future for additions/deletions to LISP-FORUM. This being a large mailing list, its numerous recipients shouldn't all be bothered with such requests. -kmp  Date: 8 May 1981 15:59 cdt From: VaughanW at HI-Multics (Bill Vaughan) Subject: please add me to the mailing list Sender: VaughanW.REFLECS at HI-Multics To: lisp-forum at MIT-AI cc: VaughanW at HI-Multics please add me to the mailing list. thanx. Bill Vaughan VaughanW at HI-Multics  Date: 2 May 1981 16:16-EDT From: Carl W. Hoffman To: DLA at MIT-EECS cc: dlw at MIT-AI, LISP-FORUM at MIT-AI, BUG-LISPM at MIT-AI Date: 1 May 1981 1845-EDT From: David L. Andre Actually, the more I think about it, a syntax such as (OPENF "FOO" ':MODE ':READ ...) *would* be awful convenient in a lot of places... I would also like to see OPENF created rather than the syntax of OPEN changed, for name symmetry with the rest of the file operations. Should CLOSEF also be created? (Actually, I would rather see the names OPEN-FILE, PROBE-FILE, DELETE-FILE, etc used.) Names like OPEN should be left for new users writing small programs, so they won't get blown out of the water by redefining a short and simple name which happens to be used by the system. OPEN just isn't used as frequently as IF, DO, or SETQ, so it doesn't warrant a four or five letter name.  Date: 1 May 1981 1845-EDT From: David L. Andre To: dlw at MIT-AI, LISP-FORUM at MIT-AI, BUG-LISPM at MIT-AI, DLA at MIT-EECS In-Reply-To: Your message of 27-Apr-81 1135-EDT Before you change the syntax of OPEN, I believe that it currently allows (open "FOO" ':OUT), and variations. Thus there is an ambiguity with (open "FOO" ':OUT ':NOERROR): Is :NOERROR an exception handler or not?
Actually, the more I think about it, a syntax such as (OPENF "FOO" ':MODE ':READ ...) *would* be awful convenient in a lot of places...  -- Dave

-------

Date: 27 April 1981 13:45-EDT
From: Daniel L. Weinreb
To: DLA at MIT-EECS
cc: LISP-FORUM at MIT-AI, BUG-LISPM at MIT-AI

Oh, now I see.  Yes, you're right.  The only question is whether to bother introducing the name OPENF, since it does look like the new functionality could be added to OPEN and WITH-OPEN-FILE without changing the old functionality.  If nobody says anything for a while, I will change OPEN and WITH-OPEN-FILE to accept the new syntax as well as the old, with the aim of leaving it that way forever, so that we do not have to force the users to change their code, and so that we don't need new versions of both of these things.  If anyone disagrees, say so and I will cease and desist.

I disagree about things "optionally taking arguments".  There are two reasons to prefer the convention that all keywords should always take one argument.  One reason is that if you have a keyword whose presence or absence serves as a flag, then it is hard to take a boolean value and pass it on to a function as that flag.  That is, suppose function FOO has a :BINARY-P keyword argument.  Then

  (foo ':bar 4 ':binary-p x ':baz 5)

is simpler than

  (if x
      (foo ':bar 4 ':binary-p ':baz 5)
      (foo ':bar 4 ':baz 5))

or

  (apply #'foo (append (if x '(:binary-p) nil) '(:bar 4 :baz 5)))

or anything else I can think of.  The other reason is that keyword argument lists can be used as disembodied property lists:

  (defun foo (x &rest options)
    (let ((plist (locf options)))
      (if (get plist ':binary-p) ...)))

So I advocate adopting the convention that all keywords always be followed by an argument.  This does have the problem that it becomes slightly more verbose to do some things.  For example, (open "foo;bar" ':read) is easier to type than (open "foo;bar" ':mode ':read) or whatever.  I am not sure what to do about this.  Opinions?
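[Weinreb's disembodied-property-list point can be sketched in modern Common Lisp, where GETF plays the role of the Lisp Machine idiom (GET (LOCF OPTIONS) ...); FOO and :BINARY-P are the hypothetical names from his message, not real system functions.]

```lisp
;; When every keyword takes exactly one argument, the &rest list is a
;; well-formed property list, so the function can search it directly.
(defun foo (x &rest options)
  (if (getf options :binary-p)
      (list :binary x)
      (list :text x)))

;; A caller can compute the flag as a boolean and pass it straight
;; through -- no need to conditionally splice the keyword in or out.
(foo 4 :binary-p t)    ; => (:BINARY 4)
(foo 4 :binary-p nil)  ; => (:TEXT 4)
```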
Date: 27 Apr 1981 1123-EDT
From: David L. Andre
To: dlw at MIT-AI
cc: LISP-FORUM at MIT-AI, BUG-LISPM at MIT-AI, DLA at MIT-EECS
In-Reply-To: Your message of 27-Apr-81 0048-EDT

Oh, well, I guess I shouldn't send messages when I'm totally burned out, because you're not the first to say you couldn't read my message...

The difference between the kind of keyword arguments that OPEN takes and the kind that other functions such as MAKE-ARRAY, LOAD-PATCHES, etc. use is that OPEN takes a LIST of keywords, whereas the other examples take an evaluated &rest arg.  My comment was basically: why don't we reach some kind of convention on keyword arguments rather than have this difference?  So OPEN would translate to OPENF as follows:

  (OPEN "DLA; ILLIT ERATE" '(:NOERROR :READ :BYTE-SIZE 4) 'MY-EXCEPTION-HANDLER)

  (OPENF "DLA; ILLIT ERATE" ':NOERROR ':READ ':BYTE-SIZE 4
         ':EXCEPTION-HANDLER 'MY-EXCEPTION-HANDLER)

Actually, this illustrates yet another inconsistency in keyword arguments: some functions (like LOAD-PATCHES) have keywords which optionally take arguments, and others (like MAKE-ARRAY) have keywords which ALWAYS take arguments.  I personally like the "optionally taking arguments" style, but I've heard differently from others...  -- Dave

-------

Date: 27 April 1981 00:46-EST
From: Daniel L. Weinreb
To: DLA at MIT-AI, LISP-FORUM at MIT-AI, BUG-LISPM at MIT-AI

I don't understand this message.  I don't see what you find wrong with the keywords that OPEN and WITH-OPEN-FILE take.  They already ARE similar in style to those taken by LOAD-PATCHES, as far as I can tell.  Could you be more specific?

Date: 24 April 1981 23:51-EST
From: David L. Andre
To: LISP-FORUM at MIT-AI, BUG-LISPM at MIT-AI
cc: DLA at MIT-EECS

The keyword arguments to OPEN are a rather inconsistent holdover from MACLISP.  How about a function OPENF which takes a pathname and then keywords a la LOAD-PATCHES?  I think WITH-OPEN-FILE could be changed to this also.
Of course, if you want old and new versions of stuff floating around, you could make OPEN accept both old and new formats, and ditto for WITH-OPEN-FILE. -- Dave
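[The both-formats compromise Andre mentions could be sketched roughly as follows, in modern Common Lisp.  COMPATIBLE-OPEN, OPEN-INTERNAL, and OLD-KEYWORD-LIST-TO-PLIST are invented names for illustration, not actual Lisp Machine functions.]

```lisp
;; Dispatch on the shape of the arguments: a single list is taken as
;; the old MACLISP-style keyword list, anything else as the new
;; MAKE-ARRAY-style keyword/value pairs.
(defun compatible-open (pathname &rest options)
  (if (and options (listp (first options)))
      ;; old style: (COMPATIBLE-OPEN "FOO" '(:READ :BYTE-SIZE 4))
      (apply #'open-internal pathname
             (old-keyword-list-to-plist (first options)))
      ;; new style: (COMPATIBLE-OPEN "FOO" :MODE :READ :BYTE-SIZE 4)
      (apply #'open-internal pathname options)))
```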