Date: 8 Mar 85 14:39:47 EST
From: Charles Hedrick
Subject: Re: file-length
To: common-lisp@SU-AI.ARPA
In-Reply-To: Message from "Scott E. Fahlman" of 7 Mar 85 23:25:00 EST

Tops-20 requires a GTJFN before we can find file length. But we can certainly implement it by opening the file temporarily inside file-length. So you shouldn't stop just because some OSes may need to open the file. However, you should consider that this may lead to ambiguity because of version numbers. Consider the following code sequences:

    (check-attributes "foo.bar")
    (if (attributes-look-good) (read-file "foo.bar"))

or

    (write-file "foo.bar")
    (look-at-attributes "foo.bar")

The problem with both of these is that some other program may create a new version of foo.bar in the interim. If we say that you can look at attributes only for an open stream, then people are more likely to open the file once and do their I/O and attribute-checking on the open stream. On TOPS-20 we could get the same effect by encouraging people to do TRUE-NAME the first time they refer to a file; then they can be sure they are always talking about the same one. But on Unix this wouldn't work. So the safest thing is probably to keep the current definition.
If somebody really does want to look at the length of a file and will never be opening it, you can define a macro to allow a syntax like

    (with-temporarily-open-file "foo.bar" x
      (file-length x))

-------

Date: Fri, 8 Mar 1985 17:41 EST
From: Rob MacLachlan
To: Charles Hedrick
Cc: common-lisp@SU-AI.ARPA
Subject: file-length
In-Reply-To: Msg of 8 Mar 1985 14:39-EST from Charles Hedrick

Your argument is bogus, since there is no guarantee that arbitrary operating systems will prevent a file from being overwritten while you have it open. In particular, Spice has no concept of an open file, and thus no assurance of consistency. Unix has no guarantee of filesystem consistency of any kind, so worrying about it is a lost cause.

Rob

Date: Fri, 8 Mar 1985 11:25 EST
From: "Scott E. Fahlman"
To: Kent M Pitman
Cc: common-lisp@SU-AI.ARPA
Subject: DO-SYMBOLS finding shadowed symbols
In-Reply-To: Msg of 8 Mar 1985 00:29-EST from Kent M Pitman

I've been having some second thoughts about shadowed symbols in DO-SYMBOLS myself. It really is pretty dirty conceptually to have DO-SYMBOLS supply anything that is not really accessible in the package. I guess there are some ways that implementors could hack this without incurring unacceptable costs. For example, we could set a bit somewhere, either in the symbol itself or in the hash-table entry, for any symbol that has ever been shadowed in any other package. For the small minority of symbols with this bit set, we would have to do the extra checking to make sure that it really is accessible; for all other symbols, we could skip this.
We wouldn't have to worry about un-setting this bit if a symbol gets un-shadowed -- it's just a heuristic. There are lots of variations on this theme. The point is that it need not involve unacceptable overhead to eliminate shadowed symbols, though it does make a bit of extra work for implementors. I guess I'd opt in this case to keep the language clean rather than the implementation. So my revised suggestion is that DO-SYMBOLS does provide all accessible symbols, including those inherited via USE-PACKAGE. It will present each symbol at least once (maybe more than once in cases of explicit importation), but will not present any symbols that are not really accessible in the package in question due to shadowing.

-- Scott

Date: Fri, 8 Mar 85 00:29 EST
From: Kent M Pitman
Subject: DO-SYMBOLS finding shadowed symbols
To: Fahlman@CMU-CS-C.ARPA
Cc: common-lisp@SU-AI.ARPA

    Date: Mon, 4 Mar 1985 23:24 EST
    From: "Scott E. Fahlman"

    In my earlier note, I sent out my view of how to patch the existing
    description. This is my view of "how it ought to be" rather than
    "what the manual says". Does anyone object to adopting this as the
    "official" interpretation? The only thing I'm uneasy about is
    letting DO-SYMBOLS generate shadowed symbols, but I don't see any
    good alternative.

I'm also quite uneasy about this shadowed-symbols issue. I see two situations: The first is interactive. When working interactively, you resort to mapping when the space of things you're frobbing is too large to do manually.
In that case you want all the help you can get, and writing the code to keep out shadowed symbols may be something you forget or find very hard to write correctly the first time (and you may only get one chance, unless you're very careful). Undoing an incorrect mapping operation involving INTERN may be massively hard.

The second is non-interactive. There you have the time to write and debug an algorithm, but you may not be thinking about the shadowed-symbol case and may forget to write it in because it doesn't occur in the scenario you're thinking about, or whatever ... so you release code that mostly works but has a bug sitting in it just waiting to happen.

If you make DO-SYMBOLS find symbols that are technically not accessible, you'd better document it well. But personally, I don't advise it, even if it makes coding it much harder. Better you, the implementor, spend the time doing it once right than users spend the time over and over again correcting for your ... I was going to say laziness, but that's obviously the wrong word for people like all the CL implementors are. Nevertheless, you get my point. "An ounce of prevention, ..." and all that.

As to worrying about duplicated symbols, that doesn't strike me as as big a deal, though I'm open to arguments to the contrary.

-kmp

Date: Fri, 8 Mar 85 00:11 EST
From: Kent M Pitman
Subject: FILE-LENGTH
To: Fahlman@CMU-CS-C.ARPA, dzg@CMU-CS-SPICE.ARPA, common-lisp@SU-AI.ARPA

    Date: Thu 07 Mar 85 17:26:25-EST
    From: dzg@CMU-CS-SPICE
    ...
    For one thing, it always forces you to write something like

        (when (probe-file FILE-NAME)
          (with-open-file (little-dummy-stream FILE-NAME :direction :input)
            ... (file-length little-dummy-stream) ...))

    even when you don't want to read the file at all, but just see how
    big it is...

I agree this is lousy. I hate idiomatic usages like this. They're a pain to read even when you recognize them, and some people seem to have a talent for picking ways to code the same idea that are much less intelligible than you could ever have guessed could be devised...

Why not just let FILE-LENGTH take an optional ELEMENT-TYPE argument to be used if the operation has to be simulated? If the file is already open and is not of the given ELEMENT-TYPE, then signal an error if it's not possible to convert from the given element type to the type the file is open in. If the file is not open (i.e., the arg was a pathname instead of a stream), then if the operating system primitively supports a file-length operation on closed files and the ELEMENT-TYPE given is compatible with the type of the file, allow the system to optimize out the idiomatic OPEN given above; otherwise just simulate it in whatever way is convenient. Seems to me like that would be portable.

-kmp

Date: Thu, 7 Mar 1985 23:25 EST
From: "Scott E. Fahlman"
To: dzg@CMU-CS-SPICE.ARPA
Cc: common-lisp@SU-AI.ARPA
Subject: file-length
In-Reply-To: Msg of 7 Mar 1985 17:26-EST from dzg at CMU-CS-SPICE

Dario,

I dimly remember some discussion of this before, but I don't have time right now to try to track this down in the archives at SAIL.
I think it was specified that the file had to be open in order for File-Length to work because a number of operating systems can't do it any other way, so in portable code you can only count on the open-stream case. There's no reason why non-brain-damaged systems like Spice could not extend this call as you propose, or provide some alternative call as a non-standard extension. (The latter would make it easier to catch non-portable usage, but users would have to remember two different calls that do roughly the same thing -- confusing at best.)

-- Scott

Date: Thu 07 Mar 85 17:26:25-EST
From: dzg@CMU-CS-SPICE
To: common-lisp@SU-AI.ARPA
Subject: file-length

I don't remember this being discussed on this bboard, so here it goes... I think the definition of (file-length) on page 425 of the aluminum edition leaves something to be desired. In particular, I object to the first sentence: " must be a stream that is open to a file."

This requirement was probably dictated by the desire to return the length in exactly the same units that were used to create the file. Reasonable as this seems, it presents a lot of problems. For one thing, it always forces you to write something like

    (when (probe-file FILE-NAME)
      (with-open-file (little-dummy-stream FILE-NAME :direction :input)
        ... (file-length little-dummy-stream) ...))

even when you don't want to read the file at all, but just see how big it is. This is not exactly clear, I think. Things are a lot worse, though, when you move into a networking environment. One such environment, which I will use as an example, is Spice.
Imagine now that you want to check the length of a remote file in Spice; imagine, say, that you are worried about whether you'd be able to retrieve a huge database file and fit it on your local disk. The Spice networking stuff is perfectly happy with a "length of remote file" request, and will tell you you do not have room. Imagine, though, that you wanted to do the same thing from within Common Lisp. Well, you have to play the little (with-open-file ...) game. BUT, in the Spice world (as in many, many other worlds) this means you indeed want to read the file, all of it! As a result, the file is shipped over to your machine, taking 5 minutes if you are lucky. And if you are unlucky, the file will overflow your paging space (not your file-system space, obviously, because you are NOT retrieving the file to the file system, by golly, just checking its size!). All of this is complete nonsense, obviously, when you could have gotten the same reply in a millisecond if you just didn't have to open the file.

I suppose one could expand on the idea of (open :direction :probe), but this still seems artificial. Why not just say that file-length takes "a filename or a stream that is open to a file", like file-write-date, and maybe let you specify what units you want file-length to return?
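The with-open-file idiom dzg complains about can at least be packaged up once. Hedrick's earlier note suggests exactly that; here is a sketch of what his proposed with-temporarily-open-file might look like (the name is his proposal, not a standard operator, and the :direction and body shape are assumptions):

```lisp
;; Sketch only: WITH-TEMPORARILY-OPEN-FILE is Hedrick's proposed name,
;; not part of the language.  It opens the named file just long enough
;; to run BODY with VAR bound to the stream, then closes it again.
(defmacro with-temporarily-open-file (filename var &body body)
  `(with-open-file (,var ,filename :direction :input)
     ,@body))

;; The "how big is it?" question then becomes a one-liner:
;;   (with-temporarily-open-file "foo.bar" x (file-length x))
```

This only hides the idiom, of course; as dzg points out, on a system like Spice the open itself is the expensive part, so a macro is no substitute for letting FILE-LENGTH accept a filename.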
Date: Tuesday, 5 March 1985, 18:20-EST
From: Guy Steele
Subject: Complex arc tangent
To: mit-eddie!common-lisp%sail.arpa@GODOT
Cc: gls@AQUINAS

I spoke with Paul Penfield yesterday, and he says that he now agrees with Kahan on the treatment of the branch cut for complex arc tangent. He proposes to write some sort of joint letter recommending to the APL and Common Lisp communities that Kahan's definition of the branch cut be adopted. This would mean a (small) change to Common Lisp if we endorse it. Then everyone who cares at all about complex functions in IEEE arithmetic will agree on the branch cuts. Are there any objections?

--Guy

Date: 4 Mar 85 23:34:48 EST
From: Charles Hedrick
Subject: Re: do-xxx-symbols
To: Fahlman@CMU-CS-C.ARPA
Cc: common-lisp@SU-AI.ARPA
In-Reply-To: Message from "Scott E. Fahlman" of 4 Mar 85 23:24:00 EST

I had forgotten that you wrote the chapter.
I think that for the author of a section to say "this is what I meant" is as authoritative as we can reasonably expect.

The only way I can see to guarantee only one occurrence, and no shadowed symbols, is to hash each symbol before passing it to the user. We did that in one implementation. It isn't that horrible; hashes are supposed to be fast. The idea is that you keep a list of the packages that you have looked at so far. For each candidate symbol, you look it up in all the hash tables you have already used. If it is found, then you forget that symbol. If you search the packages in the right order, this should do the right thing. The cost of this depends upon how expensive the contents of the loop are. It is going to perform N/2 hash lookups per symbol, where N is the length of the use list. I conjecture that N will be on the order of 3, and 3 hash lookups isn't that much. But I certainly won't do it unless I am told to.

I can think of uses for both possible semantics. If you remove duplicates, it is useful for things like APROPOS. If you don't, it is useful for code that manipulates the package system; e.g., you might use it to look for duplicates, shadowed symbols, etc. So it is important to make sure we know which thing is to be done. Don't say something like: well, we should really only show him the symbol once, but some implementations may not want to go to that work, so it's OK to ignore the problem.

-------

Date: Mon, 4 Mar 1985 23:24 EST
From: "Scott E. Fahlman"
To: Charles Hedrick
Cc: common-lisp@SU-AI.ARPA
Subject: do-xxx-symbols
In-Reply-To: Msg of 4 Mar 1985 21:50-EST from Charles Hedrick

    Can we get an authoritative statement about what is covered by
    DO-SYMBOLS and DO-EXTERNAL-SYMBOLS, and about whether duplicates
    and shadowed symbols should be removed?

Well, I think that the machinery for issuing authoritative statements is pretty rusty right now. I hope that we will soon have some sort of duly constituted authority, but right now all we have is the ad hoc executive committee of which I am the ad hoc chairman. Hedrick's note points out that the description in the manual is ambiguous at best. OK. I wrote most of that (with input from Moon and Steele, among others), and I'm sure that DO-SYMBOLS was meant to include symbols inherited from used packages. It never occurred to me that explicit IMPORT statements could set up a situation in which a symbol might show up twice unless you go to a lot of trouble to prevent this. I also didn't think about shadowed symbols.

In my earlier note, I sent out my view of how to patch the existing description. This is my view of "how it ought to be" rather than "what the manual says". Does anyone object to adopting this as the "official" interpretation? The only thing I'm uneasy about is letting DO-SYMBOLS generate shadowed symbols, but I don't see any good alternative.

-- Scott

Date: 4 Mar 85 21:50:06 EST
From: Charles Hedrick
Subject: Re: do-xxx-symbols
To: Fahlman@CMU-CS-C.ARPA
Cc: RAM@CMU-CS-C.ARPA, common-lisp@SU-AI.ARPA
In-Reply-To: Message from "Scott E. Fahlman" of 4 Mar 85 13:46:00 EST

There are at least four reasons why the description of DO-SYMBOLS does not appear to include symbols in used packages:

1) "do-symbols provides straightforward iteration over the symbols of a package".
Searching all the used packages is not straightforward iteration over the package.

2) There is an implication that UNINTERN can be called to remove symbols from the package. UNINTERN'ing a symbol in a used package does not exactly remove it from the package under discussion.

3) It is said that every symbol is processed once. In DO-ALL-SYMBOLS there is an explicit warning that symbols may occur more than once. The implication is that this is not true for DO-SYMBOLS. If DO-SYMBOLS were supposed to look at used packages, then the same warning would apply to it.

4) DO-EXTERNAL-SYMBOLS says that it is like DO-SYMBOLS, but only the external symbols of the specified package are used. The wording of DO-EXTERNAL-SYMBOLS seems to imply even more clearly that only the specified package is scanned, not any used packages. It also seems to imply that DO-SYMBOLS does the same thing, but for both internal and external symbols.

Can we get an authoritative statement about what is covered by DO-SYMBOLS and DO-EXTERNAL-SYMBOLS, and about whether duplicates and shadowed symbols should be removed? I think this question is separate from the APROPOS question. APROPOS with no argument is documented as using DO-ALL-SYMBOLS, but with an argument it is not said to use DO-SYMBOLS. So it would be perfectly acceptable to say that DO-SYMBOLS looks only at the symbols in a particular package, whereas APROPOS follows the used chain. (Indeed that is what I have done for the moment.)
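The "present in the package" vs. "accessible in the package" distinction being debated here is exactly what the second value of FIND-SYMBOL reports. A sketch (using the LISP and USER packages of the aluminum edition; a modern implementation would use COMMON-LISP and COMMON-LISP-USER instead):

```lisp
;; FIND-SYMBOL's second value distinguishes the two readings:
;; :INTERNAL or :EXTERNAL means the symbol is present in the package
;; itself; :INHERITED means it is accessible only via a used package.
(multiple-value-bind (sym status) (find-symbol "CAR" "LISP")
  (declare (ignore sym))
  status)        ; :EXTERNAL -- CAR is present in LISP

(multiple-value-bind (sym status) (find-symbol "CAR" "USER")
  (declare (ignore sym))
  status)        ; :INHERITED -- USER merely uses LISP
```

So the question is whether DO-SYMBOLS covers only the :INTERNAL/:EXTERNAL symbols or the :INHERITED ones as well.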
-------

Date: Mon, 4 Mar 1985 12:33 EST
From: Rob MacLachlan
To: Charles Hedrick
Cc: common-lisp@SU-AI.ARPA
Subject: do-xxx-symbols
In-Reply-To: Msg of 3 Mar 1985 21:43-EST from Charles Hedrick

    Date: Sunday, 3 March 1985 21:43-EST
    From: Charles Hedrick
    Re: do-xxx-symbols

    The definition of DO-SYMBOLS says that it iterates over all symbols
    accessible in the specified package, and that it does each once.
    The term "accessible in" is ambiguous. The code I have from Spice
    simply iterates over the externals and internals of that package.
    One could also interpret that as including packages used by the
    specified package. In that case, some interesting code would be
    needed to prevent a given symbol from being used more than once,
    since if IMPORT was done, a symbol could appear in several packages.
    Does Spice still make the interpretation that DO-SYMBOLS looks only
    at the package specified? Similarly, APROPOS talks about symbols
    "available in" a given package. The wording talks about inheritance
    paths, so the implication seems fairly clear that APROPOS does look
    at used packages.

Even after staring at the manual, I was unable to determine what the intent was. It is unfortunate that after spending N pages developing the theory and terminology, the actual function description blows it. The two possibilities are to iterate over all symbols available in the package, or only over those present in the package. If Do-Symbols is supposed to iterate over all symbols available in the package, then it is clearly impractical to guarantee that each symbol is only done once, so there probably should be a caveat similar to that for Do-All-Symbols in the Do-Symbols description.
I think that with some imagination one could read the statement that each symbol is done once to mean that each symbol is done at least once. There is another problem with this interpretation, though. Before you can iterate over any symbol from a used package, you must check to see if the symbol is shadowed in the inheriting package. This could cause a substantial performance penalty in Do-Symbols, since it would be forced to do a hashtable lookup for each inherited symbol iterated over. This means that doing an apropos in any package using the lisp package would require doing 1000 hashtable lookups. On the other hand, the word "accessible" could be replaced with "present", and everyone would be happy. Apropos would have to be changed to do a Do-External-Symbols on each used package, and would ignore details such as shadowed symbols.

Rob

Date: Mon, 4 Mar 1985 13:46 EST
From: "Scott E. Fahlman"
To: Rob MacLachlan
Cc: common-lisp@SU-AI.ARPA, Charles Hedrick
Subject: do-xxx-symbols
In-Reply-To: Msg of 4 Mar 1985 12:33-EST from Rob MacLachlan

I don't think that the term "accessible in" is ambiguous, given the definition on page 172. Where is the ambiguity? So DO-SYMBOLS wants to iterate over all of the internal and external symbols directly present in the current package, and also those symbols that are accessible because they are external in the packages used by the specified package. I don't think we need to worry about eliminating duplicates that result from explicit calls to IMPORT. That would entail a lot of extra bookkeeping and would usually not be important. The business about shadowing is a bit more complex. Technically, a symbol shadowed in package P is not accessible in P, but again a lot of bookkeeping would be required to eliminate these.
Unless someone has a clever idea for how to do this efficiently, we should probably revise the description of DO-SYMBOLS to allow it to ignore this problem. In the rare case where a user really cares, he can check for shadowed symbols himself. In any event, the manual should be clarified on these two points.

In the description of APROPOS, the term "available in" is supposed to be "accessible in", so the rules for DO-SYMBOLS should apply if the package is specified, and DO-ALL-SYMBOLS if all packages are to be searched. By the way, several groups have decided that the default for no package specified is bogus, and that if no package is specified the user probably should be given all symbols ACCESSIBLE IN the current package. An argument of T could mean that you really do want to see ALL the symbols, including internal compiler symbols, etc. Opinions?

-- Scott

Date: 4 Mar 85 19:05:43 EST
From: Charles Hedrick
Subject: Re: do-xxx-symbols
To: RAM@CMU-CS-C.ARPA
Cc: common-lisp@SU-AI.ARPA
In-Reply-To: Message from "Rob MacLachlan" of 4 Mar 85 12:33:00 EST

My current implementation is the following:

DO-ALL-SYMBOLS - all symbols in all packages.
DO-MOST-SYMBOLS - all symbols in all packages, except not internals in LISP or COMPILER (which I claim the user is not interested in seeing in many cases).
DO-SYMBOLS - all symbols in the package mentioned.
DO-ACCESSIBLE-SYMBOLS - all symbols in the package, and all externals in used packages. No attempt to remove shadowed symbols.
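The shadowing check that this DO-ACCESSIBLE-SYMBOLS omits, and whose per-symbol cost MacLachlan worries about, is simple to state: an inherited symbol is genuinely accessible in a package only if looking its name up in that package still finds that same symbol. A sketch (ACCESSIBLE-P and MAP-ACCESSIBLE-SYMBOLS are illustrative names, not from the thread, and the second function assumes the Spice reading in which DO-SYMBOLS scans only the package itself):

```lisp
;; A symbol S is genuinely accessible in PACKAGE only if name lookup in
;; PACKAGE finds S itself; a shadowing symbol with the same name would
;; be found instead.  This is the hashtable lookup per inherited symbol
;; whose cost is at issue above.
(defun accessible-p (symbol package)
  (multiple-value-bind (found status)
      (find-symbol (symbol-name symbol) package)
    (and status (eq found symbol))))

;; A shadow-respecting variant of DO-ACCESSIBLE-SYMBOLS, assuming
;; DO-SYMBOLS covers only symbols present in the package itself:
(defun map-accessible-symbols (fn package)
  (do-symbols (s package)                    ; internals and externals
    (funcall fn s))
  (dolist (used (package-use-list package))
    (do-external-symbols (s used)
      (when (accessible-p s package)         ; drop shadowed inherits
        (funcall fn s)))))
```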
APROPOS:
  if package is omitted or NIL: DO-MOST-SYMBOLS
  if package is T: DO-ALL-SYMBOLS
  otherwise: DO-ACCESSIBLE-SYMBOLS

-------

Date: Tue, 26 Feb 1985 19:50 EST
From: Rob MacLachlan
To: hpfclp!paul%hplabs.csnet@CSNET-RELAY.ARPA
Cc: common-lisp@SU-AI.ARPA
Subject: truncate/round/floor/ceiling functions
In-Reply-To: Msg of 25 Feb 1985 16:25-EST from hpfclp!paul%hplabs.csnet at csnet-relay.arpa

I had already pointed this out in a previous message on the mailing list. The correct interpretation is that the second value is always the remainder. The description of the meaning of the second argument is only correct if one disregards the second value.

Rob

Date: Mon, 25 Feb 85 13:25:26 pst
From: hpfclp!paul%hplabs.csnet@csnet-relay.arpa
To: hplabs!common-lisp@su-ai.ARPA
Subject: truncate/round/floor/ceiling functions

I think that I may have found an inconsistent definition for the functions floor/round/ceiling and truncate. This inconsistency occurs when there are 2 arguments. For example, let us look at (truncate 10.0 4.0).

1) We see that (truncate 10.0 4.0) is defined to be equivalent to (truncate (/ 10.0 4.0)) = (truncate 2.5), which yields (2 0.5).
2) However, applying the sentence (from pg. 216 in the Aluminum Edition) "If any of these functions is given 2 arguments x and y and produces results q and r, then q*y + r = x" yields, for x=10.0 and y=4.0, q=2 and r=2.0, or a result of (2 2.0).

We then have 2 different results for (truncate 10.0 4.0), depending upon whether one does the division or not. How is this resolved? Any clarifications would be appreciated.

Paul Beiser
Hewlett-Packard, Ft. Collins, Colorado
uucp: ...{ihnp4,hplabs}!hpfcla!paul
arpa: "hpfclp!paul%hplabs.csnet"@csnet-relay

Date: Thursday, 21 February 1985, 16:25-EST
From: Guy Steele
Subject: gcd of rationals--flame added in proof
To: rwg%RUSSIAN.SPA.Symbolics.COM@GODOT, ALAN%SCRC-STONY-BROOK.ARPA@GODOT, fateman%ucbdali%UCB-VAX.ARPA@GODOT
Cc: rem%MIT-MC.ARPA@GODOT, numerics%SCRC-STONY-BROOK.ARPA@GODOT, common-lisp%SU-AI.ARPA@GODOT, gls@AQUINAS

The claim in the manual that (lcm) should be infinity is outright wrong, as REM has already pointed out to me. Probably I was indeed misled by the formula (/ (* x y) (gcd x y)); I cannot remember. However, the statement is certainly my fault. I'm sorry. Maybe in the next version of Common Lisp we can safely define (lcm) => 1, as it will be a compatible extension.
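MacLachlan's resolution of the truncate question can be checked directly: in the two-argument case the second value is always the remainder r = x - q*y, and the equivalence to the one-argument form holds only for the first value, because the one-argument form computes its remainder against a divisor of 1:

```lisp
(truncate 10.0 4.0)      ; values 2 and 2.0, since 2*4.0 + 2.0 = 10.0
(truncate (/ 10.0 4.0))  ; values 2 and 0.5: the remainder against the
                         ; default divisor of 1, since 2*1 + 0.5 = 2.5
;; So (truncate x y) and (truncate (/ x y)) agree on the quotient but
;; not on the remainder; there is no inconsistency, only two divisors.
```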
Received: from SCRC-STONY-BROOK.ARPA by SU-AI.ARPA with TCP; 20 Feb 85 02:16:20 PST Received: from SPA-RUSSIAN by SCRC-STONY-BROOK via CHAOS with CHAOS-MAIL id 181670; Wed 20-Feb-85 04:52:21-EST Received: from SPA-LOS-TRANCOS by SPA-RUSSIAN via CHAOS with CHAOS-MAIL id 102914; Wed 20-Feb-85 01:50:12-PST Date: Wed, 20 Feb 85 01:50 PST From: Bill Gosper Subject: gcd of rationals--flame added in proof To: ALAN@SCRC-STONY-BROOK.ARPA, fateman%ucbdali@UCB-VAX.ARPA cc: rem@MIT-MC.ARPA, numerics@SCRC-STONY-BROOK.ARPA, common-lisp@SU-AI.ARPA In-Reply-To: <850129023720.8.RWG@RUSSIAN.SPA.Symbolics.COM> Message-ID: <850220015044.3.RWG@LOS-TRANCOS.SPA.Symbolics.COM> Date: Tue, 29 Jan 85 02:37 PST From: Bill Gosper Date: Saturday, 19 January 1985 01:03-EST From: Alan Bawden Date: Friday, 28 December 1984, 01:28-PST From: Bill Gosper Date: Wednesday, 19 December 1984, 23:45-EST From: David C. Plummer in disguise [(GCD 1\4 1\3) errs.] . . . BTW, the correct answer is 1\12. Not as far as ZL is concerned. Nor CL for that matter. Ouch! They must have been jet-lagged at those CL meetings. As Schroeppel pointed out several millennia back, (GCD A\B C\D) = (GCD A C)/(LCM B D), assuming args are in lowest terms. MACSYMA has long known this. You know Gosper, I had always assumed that people who proposed to extend GCD to the rationals were just confused about just what GCD really was, so I was quite surprised to hear that this was an idea endorsed by Schroeppel. So I thought about it for a while... In what sense must the GCD of 1/3 and 1/4 be 1/12? Suppose I pick some subring of Q that contains both 1/3 and 1/4 and I look at the ideal generated by 1/3 and 1/4. Certainly that ideal is the same as the ideal generated by 1/12, and in general the formula you give above will always yield such a generator, but there are many other elements in the ring that generate that ideal. In fact, since 1/12 is a unit in any subring of Q, that ideal must be generated by 1, so why not say GCD(1/3,1/4)=1? 
What is the additional constraint you impose that requires 1/12? Well, suppose that instead of thinking about GCD as defined in terms of generators of ideals in subrings of Q, we think about GCD as defined in terms of generators of Z-modules contained in Q. In that case, the formula above gives the unique (modulo sign) generator of the module generated by the arguments to GCD. But that seems somewhat arbitrary; suppose I was working with some ring other than Z, call it A, and I wanted to play with A-modules contained in Q? No problem! Although the formula can't always give you a unique generator, it will give you -one- of the generators, which is certainly the best you can ask for. Summary: I'm convinced. This really is the right way to extend GCD to the rational numbers if you are going to do it at all. (Since my suggestions that GCD, NUMERATOR and DENOMINATOR be extended in the -only- possible way to the complex integers were ignored, I have little hope that Common Lisp will adopt the idea, however.) But the absoLULU is further down page 202 of the CL book: "Mathematically, (lcm) should return infinity." This is so wildly absurd that I can't even guess the fallacy that led to it. It should simply be 1. It is clearly a result of taking the identity (LCM ...) = (/ (ABS (* ...)) (GCD ...)) too seriously. Hell, that's only valid for exactly TWO arguments! (+ A B) is (+ (MAX A B) (MIN A B)), but (+ A B C) isn't (+ (MAX A B C) (MIN A B C)) and (+ A) isn't (+ (MAX A) (MIN A)) ! From: fateman@ucbdali (Richard Fateman) To: rwg@NIMBUS.SPA.Symbolics Subject: Re: Irrational GCD The notion of a GCD over fractions is probably not such a great thing to put in as a primitive in a programming language. I guess it shows how advanced Common Lisp is.. The usual kind of programming language has trouble with REMAINDER. It was a real pain to get straight in macsyma, as I recall. If you worry about unique factorization domains, I believe fields (e.g. 
rational numbers) don't qualify for the GCD operation. Which means you can probably extend the operation any way you please so long as it coincides on the integers... e.g. gcd (a\b,c\d) == if b=d=1 then gcd(a,c) else 1. Barf, what is all this prissy pedantry? Groups, modules, rings, ufds, patent-office algebra. Barf! GCD once stood for Greatest Common Divisor, an English Phrase. What is the greatest rational that produces an integer when you divide it into 1/3 and 1/4? Answer: 1/12. Another view. Extend unique prime factorization to rationals. Then you got p1^a1 p2^a2 . . . just like with integers, except now the ai can be negative. GCD is still prod pi^min(ai of each arg), and LCM is still max. Even (DEFUN EGCD (A B) (IF (ZEROP B) (ABS A) (EGCD B (\ A B)))) does the right thing with rationals! Then you can (DEFUN ELCM (&REST NUMS) (CL:// (APPLY #'EGCD (MAPCAR #'CL:// NUMS)))), i.e., reciprocate the GCD of the reciprocals. (Like MIN is negative of MAX of negatives.) Why isn't there a CL destructive MAP on sequences??  Received: from RUTGERS.ARPA by SU-AI.ARPA with TCP; 18 Feb 85 10:28:36 PST Date: 18 Feb 85 13:30:14 EST From: Charles Hedrick Subject: doc strings To: common-lisp@SU-AI.ARPA Does anybody happen to have a set of documentation strings for Common Lisp? The Spice Lisp code that we are using does have it, but for various reasons it would be mildly helpful for us to have a separate file that has all the doc strings. 
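[Gosper's claim above, that the plain Euclidean recursion already does the right thing on ratios, is easy to verify. In the sketch below, REM is the Common Lisp spelling of the \ in his EGCD, and RATIONAL-GCD is a hypothetical helper illustrating Schroeppel's formula; CL's own GCD accepts only integers.]

```lisp
;; Gosper's EGCD: the Euclidean algorithm, applied directly to ratios.
(defun egcd (a b)
  (if (zerop b)
      (abs a)
      (egcd b (rem a b))))

;; Schroeppel's formula, for arguments in lowest terms:
;; gcd(a/b, c/d) = gcd(a,c) / lcm(b,d).  Coincides with GCD on
;; integers, since (denominator n) is 1 for an integer n.
(defun rational-gcd (x y)
  (/ (gcd (numerator x) (numerator y))
     (lcm (denominator x) (denominator y))))

(egcd 1/4 1/3)          ; => 1/12
(rational-gcd 1/4 1/3)  ; => 1/12
```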
-------  Received: from SCRC-STONY-BROOK.ARPA by SU-AI.ARPA with TCP; 14 Feb 85 11:39:40 PST Received: from SCRC-MISSISSIPPI by SCRC-STONY-BROOK via CHAOS with CHAOS-MAIL id 178344; Thu 14-Feb-85 14:41:19-EST Date: Thu, 14 Feb 85 14:41 EST From: Kent M Pitman Subject: Not clearly non-controversial To: brown@DEC-HUDSON.ARPA, COMMON-LISP@SU-AI.ARPA In-Reply-To: The message of 6 Feb 85 12:52-EST from brown at DEC-HUDSON Message-ID: <850214144132.2.KMP@MISSISSIPPI.SCRC.Symbolics.COM> Date: Wed, 06 Feb 85 12:52:15 EST From: brown@DEC-HUDSON Re: Clearly non-controversial USER-NAME [function] A string is returned that identifies the current user of the Common Lisp system. If a reasonable name cannot be determined, NIL is returned (or an empty-string?). (It has been suggested that this should be called something like USER-INFORMATION and take a lot of keyword arguments.) Not just user information, but system information. eg, on the LispM it means little (even in Common Lisp) to know the guy's name is "Joe" unless you know what machine or site that name is relative to. At the least, you'd need a way of getting the machine name, too. The USER-INFORMATION thing sounds a little more plausible, but I'd prefer to see a specific suggestion before saying go ahead. DELETE-SETF-METHOD access-fn [function] Removes the update form associated with the symbol access-fn. The access function can no longer be used as generalized variable. I dislike the use of the morpheme "DELETE-" in this context. The Lisp Machine uses "UN" tacked onto the defining form name, which is easy to remember and doesn't feel like it wants a second argument of a list to delete the thing from. Perhaps we should have a lot of UNthings. Perhaps we should have a special form UNDO which works like SETF but undoes things. eg, (UNDO (DEFINE-SETF-METHOD ...)) would undo a SETF method. (UNDO (DEFUN FOO (X) ...)) would undo a DEFUN. 
We'd have to define clearly the effect of: (DEFUN F (X) X) (DEFUN F (X) (1+ X)) (UNDO (DEFUN F (X) (1+ X))) If it wasn't to make F be an identity function again, then UNDO might be a slightly misleading name. Anyway, I'd rather address this general problem than add a ton of special case DELETE-mumble or UNmumble functions. COMPILEDP symbol [function] Returns NIL if the symbol does not have a compiled function definition. Returns T if the symbol has a compiled function definition which was not compiled by the COMPILE function; for example, loaded from a compiled file. Returns the interpreted function definition if the function was compiled by the COMPILE function. What about (DEFUN FOO (X) X) (COMPILE 'FOO) (DEFUN F () (FLET ((FOO (Y) (1+ Y))) (COMPILEDP 'FOO))) I would feel better if any COMPILEDP thing took an argument of an object and told you if it were compiled. So you could write (COMPILEDP #'symbol) instead. Personally, I wish COMPILE had been defined to just return its result rather than to also install it because it propagates the same kind of confusion. But I guess it's a little late to worry about that now. -kmp  Date: 12 Feb 85 1931 PST From: Jon White Subject: Your non-controversial suggestions for function names To: brown@DEC-HUDSON.ARPA, common-lisp@SU-AI.ARPA Eric Benson and I quite independently came up with the need for DELETE-PACKAGE last week; oddly enough, we even chose the same name! The 3600 documentation has 'pkg-kill' to de-register a package (its name and all nicknames) and to flush the respective 'use' lists; but we rather agree that DELETE-PACKAGE ought also to make a pass over the atoms in the package to insure that any who are interned with that package as 'home' have their symbol-package slot smashed [i.e., at least a subset of what unintern would do]. 
-- JonL --  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 7 Feb 85 22:24:46 PST Received: ID ; Fri 8 Feb 85 01:26:21-EST Date: Fri, 8 Feb 1985 01:26 EST Message-ID: Sender: WHOLEY@CMU-CS-C.ARPA From: Skef Wholey To: Common-Lisp@SU-AI.ARPA Subject: Clearly non-controversial From: brown at DEC-HUDSON Re: Clearly non-controversial DELETE-SETF-METHOD access-fn [function] Removes the update form associated with the symbol access-fn. The access function can no longer be used as generalized variable. How about UN-DEFTYPE and UN-DEFSTRUCT and UN-DEFCONSTANT and so on? All of these would be useful (for the same reason as DELETE-SETF-METHOD is useful -- one is really screwed if there's no way to undo a wrong one). Coming up with a list of all such things is a non-trivial task, and including one without all others could be silly. On the other hand, adding a dozen undo functions to the language, however simple, might stir up some controversy. I don't know the right way to go here. Perhaps implementors might include information on how to undo these things in their red pages. Undoing functions could conceivably be treated as "user interface", since they aren't likely to appear in programs. HASH-TABLE-REHASH-SIZE hash-table [function] HASH-TABLE-REHASH-TRESHOLD hash-table [function] HASH-TABLE-SIZE hash-table [function] HASH-TABLE-TEST hash-table [function] These functions return the values that were specified when the hash-table was created. When it was created? How about the current (possibly after rehashing) values? Was your above description a thinko? Could the rehash parameters be set with SETF? [Rambling ideas: That could be useful for growing hash tables in some non-linear, non-exponential fashion. Maybe :REHASH-SIZE could also be a function, in which case it takes an integer (the old size) and returns an integer (the new size). :REHASH-THRESHOLD could likewise be a two-arg predicate. This would let you rehash hash tables every Tuesday, for example.] 
DELETE-PACKAGE package [function] Uninterns all the symbols interned in package, deletes the package, and unuses all packages it uses. An error is signaled if the package is used by any other package. Yes, that's definitely a good thing to have. Hint for programmers living without this: use Rename-Package to get rid of the bogus package. (This list used to contain TERMINALP which returned T if its argument (string, pathname, etc.) was an "interactive terminal". I think this function would be useful, but decided I was unable to define what a terminal is, so removed it.) I don't think all implementations could accurately answer the "is this stream backed by a user" question. I guess a second value indicating whether or not the answer was determined for sure would have to be returned. What would you use such a thing for? I can only think of things that aren't really good things for supposedly portable programs to do. In line with the other stream type predicate functions (e.g. Input-Stream-P), perhaps the name could be Interactive-Stream-P? --Skef  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 6 Feb 85 18:46:50 PST Received: ID ; Wed 6 Feb 85 21:48:04-EST Date: Wed, 6 Feb 1985 21:47 EST Message-ID: From: Rob MacLachlan To: REM@MIT-MC.ARPA cc: common-lisp@SU-AI.ARPA Subject: of course you don't copy everything that includes that structure!! In-reply-to: Msg of 1985 Feb 06 17:41:06 PST (=GMT-8hr) from Robert Elton Maas Date: 1985 February 06 17:41:06 PST (=GMT-8hr) From: Robert Elton Maas To: RAM at CMU-CS-C Re: of course you don't copy everything that includes that structure!! > Query: Does a copier for a structure correctly copy all structures > which include that structure? ABSOLUTELY NOT!!!! It's possible you had a typographic error there and said the exact opposite of what you meant, but I'll assume no typo and answer exactly what you said. 
I think that you totally spaced what I was talking about, which was structures and copiers as created by defstruct, with include meaning defstruct inclusion. Looking back at my message, I noticed that I only mentioned Defstruct in the subject... I have had some second thoughts about the issue since I sent the message, but I still don't think that the answer is blatantly obvious. The thing that I realized is that if the canonical implementation of defstruct copiers is the one that we have chosen, in which there is basically one generic defstruct copier function that copies all types of structures, then it is kind of silly to have distinct named copier functions for each structure type, since there could just as easily be one Copy-Structure function. Presumably the rationale behind having distinct copier functions is that they could be more efficient, since they could be specialized for the particular structure that they copy. The problem is that any likely scheme for significantly specializing the copier function will probably not have the property that subtypes (by inclusion) of that structure are correctly copied. Rob  Received: from DEC-HUDSON.ARPA by SU-AI.ARPA with TCP; 6 Feb 85 09:49:38 PST Date: Wed, 06 Feb 85 12:52:15 EST From: brown@DEC-HUDSON Subject: Clearly non-controversial To: common-lisp@su-ai Here are several clearly non-controversial functions which I think should be considered for inclusion in the language. USER-NAME [function] A string is returned that identifies the current user of the Common Lisp system. If a reasonable name cannot be determined, NIL is returned (or an empty-string?). (It has been suggested that this should be called something like USER-INFORMATION and take a lot of keyword arguments.) DELETE-SETF-METHOD access-fn [function] Removes the update form associated with the symbol access-fn. The access function can no longer be used as a generalized variable. 
HASH-TABLE-REHASH-SIZE hash-table [function] HASH-TABLE-REHASH-THRESHOLD hash-table [function] HASH-TABLE-SIZE hash-table [function] HASH-TABLE-TEST hash-table [function] These functions return the values that were specified when the hash-table was created. DELETE-PACKAGE package [function] Uninterns all the symbols interned in package, deletes the package, and unuses all packages it uses. An error is signaled if the package is used by any other package. UNCOMPILE symbol [function] Restores the interpreted function definition of a symbol if the symbol's function definition was compiled by the COMPILE function. COMPILEDP symbol [function] Returns NIL if the symbol does not have a compiled function definition. Returns T if the symbol has a compiled function definition which was not compiled by the COMPILE function; for example, loaded from a compiled file. Returns the interpreted function definition if the function was compiled by the COMPILE function. (This list used to contain TERMINALP which returned T if its argument (string, pathname, etc.) was an "interactive terminal". I think this function would be useful, but decided I was unable to define what a terminal is, so removed it.) -Gary Brown  Received: from SCRC-STONY-BROOK.ARPA by SU-AI.ARPA with TCP; 4 Feb 85 12:48:07 PST Received: from SCRC-EUPHRATES by SCRC-STONY-BROOK via CHAOS with CHAOS-MAIL id 171683; Mon 4-Feb-85 15:44:04-EST Date: Mon, 4 Feb 85 15:41 EST From: David A. Moon Subject: FLET and the INLINE declaration To: Common-Lisp@SU-AI.ARPA Message-ID: <850204154157.1.MOON@EUPHRATES.SCRC.Symbolics.COM> Date: Mon, 4 Feb 85 14:20 EST From: Charles Hornig ....This is a good idea except for one problem: where do you put the declaration? FLET and LABELS do not allow you to put declarations before their bodies. Perhaps this is a language bug. 
(flet ((plus1 (x) (1+ x)))
  (declare (inline plus1)) ;; This declare is illegal in the CLM
  (list (plus1 1) (plus1 2)))

You have to say

(flet ((plus1 (x) (1+ x)))
  (locally (declare (inline plus1))
    (list (plus1 1) (plus1 2))))

I think this is a language bug and declarations, other than declarations of variable bindings, should have been allowed at this position in FLET, LABELS, and MACROLET. Does anyone else have an opinion?  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 3 Feb 85 08:33:39 PST Received: ID ; Sun 3 Feb 85 11:35:09-EST Date: Sun, 3 Feb 1985 11:35 EST Message-ID: From: Rob MacLachlan To: common-lisp@SU-AI.ARPA Subject: Defstruct copiers and included structures Query: Does a copier for a structure correctly copy all structures which include that structure? I would argue that it should, both on esthetic and practical grounds. Since the including structure is a subtype of the included structure, it should inherit all operations on that structure. It would also be useful to be able to count on this behavior. It turns out that our implementation already does this, since it uses the same copier function for all structures. Rob  Received: from CSNET-RELAY.ARPA by SU-AI.ARPA with TCP; 29 Jan 85 13:28:02 PST Received: from hplabs by csnet-relay.csnet id ah07582; 29 Jan 85 16:13 EST Received: by HP-VENUS id AA16614; Mon, 28 Jan 85 07:15:34 pst Message-Id: <8501281515.AA16614@HP-VENUS> Date: Sun 27 Jan 85 19:29:41-PST From: Martin Subject: Re: how to document yellow page entries, belated discussion by REM To: Fahlman, REM%IMSSS@su-score.ARPA Cc: COMMON-LISP@su-ai.ARPA, GRISS@hplabs.CSNET In-Reply-To: Message from ""Scott E. Fahlman" " of Sat 26 Jan 85 23:26:30-PST Source-Info: From (or Sender) name not authenticated. I agree with Scott. We use (as Bob Kessler pointed out) a comment convention. Most of the time it's ignored, but certain programs extract the stuff as desired. 
M -------  Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 29 Jan 85 03:35:31 PST Date: Tue, 29 Jan 1985 06:34 EST Message-ID: From: PGS%MIT-OZ@MIT-MC.ARPA To: common-lisp@SU-AI.ARPA Subject: definition of lexical closure In-reply-to: Msg of 27 Jan 1985 19:57-EST from PGS Date: Sunday, 27 January 1985 19:57-EST From: PGS To: common-lisp at SU-AI.ARPA Re: definition of lexical closure I was talking to someone today about the Common Lisp notion of a closure, and I noticed that the definition of a lexical closure on page 87 of the CL manual is sort of vague and unsatisfactory. It says that a lexical closure is the object returned by FUNCTION, that is, a function which, when invoked, obeys the rules for lexical scoping. The problem with this definition is that it makes it difficult to explain why the thing referred to is called a closure. It's hard to relate the definition even vaguely to the set-theoretic notion of a closure, because it doesn't describe a set, so one can't explain the term as coming from the notion of `closing over' a set of bindings... Oops, I take it all back. I dug out my copy of Kleene and found that I really wanted the predicate calculus notion of closure, not the set-theoretic notion of closure.  Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 27 Jan 85 17:00:58 PST Date: Sun, 27 Jan 1985 19:57 EST Message-ID: From: PGS%MIT-OZ@MIT-MC.ARPA To: common-lisp@SU-AI.ARPA Subject: definition of lexical closure I was talking to someone today about the Common Lisp notion of a closure, and I noticed that the definition of a lexical closure on page 87 of the CL manual is sort of vague and unsatisfactory. It says that a lexical closure is the object returned by FUNCTION, that is, a function which, when invoked, obeys the rules for lexical scoping. The problem with this definition is that it makes it difficult to explain why the thing referred to is called a closure. 
It's hard to relate the definition even vaguely to the set-theoretic notion of a closure, because it doesn't describe a set, so one can't explain the term as coming from the notion of `closing over' a set of bindings. If you say that the reason why lexical closures are functions in Common Lisp, and not just sets of bindings, is because there is this sense of a closure not only of the bindings in the definition environment, but of the bindings of the formals to the actuals in the invocation of a function, you create this other pedagogical problem, which is that the thing returned by FUNCTION isn't a lexical closure then, because it can't include the bindings created by function invocation, because those bindings are created afresh whenever the function is invoked, and don't even exist when the object is returned. It was easier to explain what a Lisp Machine Lisp closure was, because there was an idea that a closure existed as something separate from a function. One could do more than invoke the closure's function; there were things like SYMEVAL-IN-CLOSURE, for example. So a closure could be explained as a set of bindings, which is much more palatable than saying that it is a function. Those were sets of dynamic bindings. What I would like to say to people who ask what a lexical closure is in Common Lisp is that it is a set of lexical bindings; that is, the set of lexical bindings in the environment in which a function is defined. Then we could say that FUNCTION returns, not a lexical closure, but an object that includes a lexical closure, and that the environment in which the function text is evaluated is the union of that lexical closure with the bindings of the formals to the actuals. I'd like to see this definition in a future edition of the CL manual, on the grounds that it's cleaner and therefore easier to teach. Would anyone object?  
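[The distinction drawn above, between the captured bindings of the defining environment and the fresh bindings of formals to actuals made at each invocation, shows up in even a tiny example. MAKE-ADDER is an illustrative name, not something from the discussion.]

```lisp
(defun make-adder (n)
  ;; The object returned by FUNCTION carries the lexical binding of N
  ;; from the environment in which the lambda-expression appears ...
  #'(lambda (x) (+ x n)))

;; ... while the binding of X is created afresh at each invocation,
;; and so cannot be part of what FUNCTION returns.
(let ((add3 (make-adder 3))
      (add10 (make-adder 10)))
  (list (funcall add3 4)      ; => 7
        (funcall add10 4)))   ; => 14
```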
Received: from SU-SCORE.ARPA by SU-AI.ARPA with TCP; 26 Jan 85 22:16:13 PST Received: from IMSSS by Score with Pup; Sat 26 Jan 85 22:15:36-PST Date: 26 Jan 1985 2214-PST From: Rem@IMSSS Subject: Character macros, erratum To: COMMON-LISP%SU-AI@SCORE According to the index in the CL book, topic "characters", subtopic "macros" is on pages 246-351 inclusive. This is obviusly a typo. Does somebody have a collected list of errata that I could use to fixup my copy of the book en masse to avoid bumping into already-found errata? I think I asked this general question when I first found out about this mailing list, but nobody knew of any such errata collection. [Erratum of my own, which I cannot correct in NETMSG=SNDMSG: obviusly -> obviously] -------  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 26 Jan 85 17:53:13 PST Received: ID ; Sat 26 Jan 85 20:54:38-EST Date: Sat, 26 Jan 1985 20:54 EST Message-ID: Sender: FAHLMAN@CMU-CS-C.ARPA From: "Scott E. Fahlman" To: REM%IMSSS@SU-SCORE.ARPA Cc: COMMON-LISP@SU-AI.ARPA Subject: how to document yellow page entries, belated discussion by REM In-reply-to: Msg of 85-01-25 07:54:58 PST (=GMT-8hr) from Robert Elton Maas I totally disagree with your statement that *any* use of ;-style comments in an otherwise readable and printable file is bad programming style. If there is some reason you want to process or save such comments when the file is read, it is a trivial matter to redefine the character macro for #\; to pack the comment line into a string and return it in some form that the reader will find unobjectionable. So I see no reason why ;-style comments are not a good vehicle for header information in yellow pages files. The default Common Lisp reader discards these things, but that's what you want most of the time. If you want a program to extract and save this stuff, building a reader to do that is well within the capabilities of most beginning Lisp programmers. -- Scott  Sent: to SU-AI.ARPA by IMSSS.? 
via ETHERNET with PUPFTP; 85-01-25 07:57:12 PST (=GMT-8hr) Date: 85-01-25 07:54:58 PST (=GMT-8hr) Message-id: SU-IMSSS.REM.A132012160130.G0348 From: Robert Elton Maas To:Fahlman@CMU-CS-C CC:COMMON-LISP@SU-AI Subject: how to document yellow page entries, belated discussion by REM Reply-to: REM%IMSSS@SU-SCORE.ARPA > Date: Thu, 24 Jan 1985 21:18 EST > From: "Scott E. Fahlman" > > ;;; Program name: Gritch Generator > ;;; Keywords: foo, bar, grumble, bletch > ;;; Author: James Madison > ;;; Description: This is an example of a long field that goes onto more > ;;; than one line. The subsequent lines have this extra bit of > ;;; indentation after the triple-semicolon stuff. One disadvantage of this format, as any ;-style comments, is that when you copy a file using the LISP reader, such as to prettyprint or to translate to another dialect, or to convert into or out of RLISP form, or to expand some private macros, etc., all those comments are stripped off during READing and thus will be missing from the output version of the program. Do you know of a LISP pseudo-reader that somehow internalizes sections of a file like READ does except that ;-style comments are retained, somehow attached to the normal s-expressions, so that when the file is PRINTed or PRETTYPRINTed etc. back out using a correspondingly-modified version of the appropriate output function the ;-style comments are put back in? If not, I would say that *any* use of ;-style comments in an otherwise READable&PRINTable LISP-source file is bad programming style since it limits the kinds of processing that can effectively be done on the file. (CC to COMMON-LISP general forum to spark debate on this controversial issue.)  Received: from SCRC-STONY-BROOK.ARPA by SU-AI.ARPA with TCP; 24 Jan 85 18:00:07 PST Received: from SCRC-EUPHRATES by SCRC-STONY-BROOK via CHAOS with CHAOS-MAIL id 165974; Thu 24-Jan-85 21:01:32-EST Date: Thu, 24 Jan 85 21:00 EST From: David A. 
Moon Subject: Make-Hash-Table To: Skef Wholey cc: Common-Lisp@SU-AI.ARPA, Common-Lisp-Implementation@SU-AI.ARPA, bug-CLCP@SCRC-STONY-BROOK.ARPA In-Reply-To: Message-ID: <850124210049.0.MOON@EUPHRATES.SCRC.Symbolics.COM> Date: Tue, 22 Jan 1985 22:56 EST From: Skef Wholey I propose the following clarification to the specification of how the :rehash-threshold and :rehash-size options interact: There are four cases: 1. :rehash-threshold is an integer and :rehash-size is an integer. In this case, when a hash table is grown (by adding rehash-size to the current hash table size), the rehash-threshold is scaled up by multiplying it by the ceiling of the ratio of the new size to the original size. 2. :rehash-threshold is an integer and :rehash-size is a float. In this case, when a hash table is grown (by multiplying the current hash table size by the rehash-size), the rehash-threshold is scaled up by multiplying it, too, by the rehash-size. Don't forget to convert the result of that multiplication from a float back to an integer! 3. :rehash-threshold is a float and :rehash-size is an integer. In this case, when a hash table is grown (by adding rehash-size to the current hash table size), we just leave the rehash-threshold alone. 4. :rehash-threshold is a float and :rehash-size is a float. To grow, just multiply the current hash table size by the rehash-size and again leave the rehash-threshold alone. If :rehash-threshold is a fixnum, then the hash table is grown when the number of entries in the table exceeds the :rehash-threshold. If :rehash-threshold is a float, then the hash table is grown when the ratio of the number of entries in the table to the size of the table exceeds the :rehash-threshold. I think you can eliminate cases 1 and 2 by the trick of doing (when (integerp rehash-threshold) (setq rehash-threshold (/ (float rehash-threshold) size))) at initialization. Now it scales automatically. 
This seems to be the "obvious" interpretation of the description on pages 283-284, and this is how I intended to implement hash tables for Spice Lisp. I believe this interpretation. Bug-CLCP: We don't seem to implement any case other than #4. This is mentioned in the release notes.  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 22 Jan 85 19:55:19 PST Received: ID ; Tue 22 Jan 85 22:56:36-EST Comments: This message delayed at SU-AI by error in distribution-list file Date: Tue, 22 Jan 1985 22:56 EST Message-ID: Sender: WHOLEY@CMU-CS-C.ARPA From: Skef Wholey To: Common-Lisp@SU-AI.ARPA, Common-Lisp-Implementation@SU-AI.ARPA Subject: Make-Hash-Table (I sent a version of this message earlier today, which apparently never got back to me or anyone else at CMU. Here's another try. I added a new paragraph while resending.) I propose the following clarification to the specification of how the :rehash-threshold and :rehash-size options interact: There are four cases: 1. :rehash-threshold is an integer and :rehash-size is an integer. In this case, when a hash table is grown (by adding rehash-size to the current hash table size), the rehash-threshold is scaled up by multiplying it by the ceiling of the ratio of the new size to the original size. 2. :rehash-threshold is an integer and :rehash-size is a float. 
In this case, when a hash table is grown (by multiplying the current hash table size by the rehash-size), the rehash-threshold is scaled up by multiplying it, too, by the rehash-size. 3. :rehash-threshold is a float and :rehash-size is an integer. In this case, when a hash table is grown (by adding rehash-size to the current hash table size), we just leave the rehash-threshold alone. 4. :rehash-threshold is a float and :rehash-size is a float. To grow, just multiply the current hash table size by the rehash-size and again leave the rehash-threshold alone. If :rehash-threshold is a fixnum, then the hash table is grown when the number of entries in the table exceeds the :rehash-threshold. If :rehash-threshold is a float, then the hash table is grown when the ratio of the number of entries in the table to the size of the table exceeds the :rehash-threshold. This seems to be the "obvious" interpretation of the description on pages 283-284, and this is how I intended to implement hash tables for Spice Lisp. I believe I broke the behavior sometime when I was trying to eliminate non-fixnum arithmetic from the system while bootstrapping a new instruction set. So, those with our code: beware. I'll fix it soon. Oh, I noticed there's a funny typo in the index. On page 461, instead of ":rehash-threshold keyword," it says ":rehash-threshold keyboard." I wonder if there are any others like that. --Skef  Received: from DEC-HUDSON.ARPA by SU-AI.ARPA with TCP; 23 Jan 85 07:43:21 PST Date: Wed, 23 Jan 85 10:45:43 EST From: greek@DEC-HUDSON Subject: MAKE-HASH-TABLE To: common-lisp@su-ai I think Skef's got it. That's the way I implemented it too. - Paul  Received: from DEC-HUDSON.ARPA by SU-AI.ARPA with TCP; 22 Jan 85 12:01:28 PST Date: Tue, 22 Jan 85 15:03:42 EST From: greek@DEC-HUDSON Subject: Problems with MAKE-HASH-TABLE To: common-lisp@su-ai No, Dan, honest, I don't understand what the manual is saying. I'm really asking people for clarification, really, really. - Paul  Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 22 Jan 85 11:12:33 PST Received: by mit-eddie.Mit-chaos.Arpa id AA29565; Tue, 22 Jan 85 14:10:26 est Received: by godot with CHAOS id AA11012; Tue, 22 Jan 85 14:05:49 est Date: Tuesday, 22 January 1985, 14:06-EST From: Dan Aronson Subject: Problems with MAKE-HASH-TABLE To: mit-eddie!greek%DEC-HUDSON.ARPA@GODOT Cc: mit-eddie!common-lisp%su-ai.ARPA@GODOT In-Reply-To: <8501221627.AA28645@mit-eddie.Mit-chaos.Arpa> Paul, If you are complaining about how the definition of :REHASH-THRESHOLD is written I agree. It seems that we both understand what it means but the description is incorrect. --Dan Aronson  Received: from DEC-HUDSON.ARPA by SU-AI.ARPA with TCP; 22 Jan 85 08:19:00 PST Date: Tue, 22 Jan 85 10:43:46 EST From: greek@DEC-HUDSON Subject: Problems with MAKE-HASH-TABLE To: common-lisp@su-ai I got a response from Dan Aronson concerning my problems with the :REHASH-THRESHOLD argument to MAKE-HASH-TABLE. I still don't understand it. He said that when the threshold was an integer, then the rehash size also had to be an integer and the table should grow when it is greater than threshold/size percent full. Dan, did you mean threshold/rehash-size or threshold/original-size? Don't forget, the rehash size is the number of entries to ADD, not the new size. 
I think the book wants to say that the threshold is an integer greater than zero and less than the size after rehashing, not the rehash-size. I still don't know what to do if the threshold is a fraction. Is that a fraction of the original size or the rehashed size? I can use plenty of common sense when reading the description, but it still doesn't make any sense. - Paul  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 18 Jan 85 14:53:43 PST Received: ID ; Fri 18 Jan 85 17:54:58-EST Date: Fri, 18 Jan 1985 17:54 EST Message-ID: From: Steven To: common-lisp@SU-AI.ARPA Subject: Filtering calls The current language definition seems to have a small problem. I'll follow through a hack example - anyone with any imagination should see the problem (whether my example means anything is a separate issue). Let's assume that you have to write (VAR X) instead of X to reference a given variable (and, for convenience, that everything works like SETF). The global macro for VAR does the normal variable thing. DEFMETHOD could then *filter* through the calls to VAR for those of instance variables, replacing them with calls to a macro IV, and leaving the rest for the global macro. How can we set up such a filter? We can't just deal with the global definition, because someone might want to filter the remaining VARs in some code wrapped around ours. Macrolet would cause the remaining VARs to be examined again and again... I think we need some way to deal with expansion functions explicitly, but I'm not sure. -- Steve  Received: from XEROX.ARPA by SU-AI.ARPA with TCP; 16 Jan 85 04:23:17 PST Received: from Semillon.ms by ArpaGateway.ms ; 16 JAN 85 04:24:30 PST Date: 16 Jan 85 04:23 PST From: JonL.pa@XEROX.ARPA Subject: Silicon Valley Syndrome To: Common-Lisp@SU-AI.ARPA cc: JonL.pa@XEROX.ARPA Today I am returning to the Common Lisp world by joining Lucid, Inc., a new start-up company founded by Dick Gabriel. 
Lucid will be developing Common Lisp systems for a variety of different hardwares, as well as doing other related software R&D. I've enjoyed the past 2 1/2 years working on Interlisp-D at Xerox, and am looking forward to the many opportunities in this growing new company. I invite you all to contact me, after today, at Jon L White Lucid, Inc. 1090 East Meadow Circle Palo Alto CA 94303 415-424-8855 For a period of time, my ArpaNet mail addresses will be JONL@MC JLW@SU-AI -- JonL --  Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 5 Jan 85 10:07:06 PST Date: Saturday, 5 January 1985 04:19-EST From: Alan Bawden To: Bill Gosper Cc: BUG-LISPM at SWW-WHITE, common-lisp at SU-AI, DCP at SCRC-QUABBIN, numerics at SPA-NIMBUS Subject: Irrational GCD Date: Friday, 28 December 1984, 01:28-PST From: Bill Gosper Date: Wednesday, 19 December 1984, 23:45-EST From: David C. Plummer in disguise [(GCD 1\4 1\3) errs.] . . . BTW, the correct answer is 1\12. Not as far as ZL is concerned. Nor CL for that matter. Ouch! They must have been jet-lagged at those CL meetings. As Schroeppel pointed out several millennia back, (GCD A\B C\D) = (GCD A C)/(LCM B D), assuming args are in lowest terms. MACSYMA has long known this. You know Gosper, I had always assumed that people who proposed to extend GCD to the rationals were just confused about just what GCD really was, so I was quite surprised to hear that this was an idea endorsed by Schroeppel. So I thought about it for a while... In what sense must the GCD of 1/3 and 1/4 be 1/12? Suppose I pick some subring of Q that contains both 1/3 and 1/4 and I look at the ideal generated by 1/3 and 1/4. Certainly that ideal is the same as the ideal generated by 1/12, and in general the formula you give above will always yield such a generator, but there are many other elements in the ring that generate that ideal. In fact, since 1/12 is a unit in any subring of Q, that ideal must be generated by 1, so why not say GCD(1/3,1/4)=1? 
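[Editor's note: the Schroeppel identity quoted above translates directly into Lisp; a sketch, not from the original messages:]

```lisp
;; Sketch of the rule (GCD a/b c/d) = (GCD a c)/(LCM b d).  Common Lisp
;; rationals are always kept in lowest terms, so the precondition holds
;; automatically; NUMERATOR and DENOMINATOR also accept integers
;; (an integer's denominator is 1), so this subsumes the integer case.
(defun rational-gcd (x y)
  (/ (gcd (numerator x) (numerator y))
     (lcm (denominator x) (denominator y))))

;; (rational-gcd 1/4 1/3) => 1/12, the answer Gosper gives above.
```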
What is the additional constraint you impose that requires 1/12? Well, suppose that instead of thinking about GCD as defined in terms of generators of ideals in subrings of Q, we think about GCD as defined in terms of generators of Z-modules contained in Q. In that case, the formula above gives the unique (modulo sign) generator of the module generated by the arguments to GCD. But that seems somewhat arbitrary; suppose I was working with some ring other than Z, call it A, and I wanted to play with A-modules contained in Q? No problem! Although the formula can't always give you a unique generator, it will give you -one- of the generators, which is certainly the best you can ask for. Summary: I'm convinced. This really is the right way to extend GCD to the rational numbers if you are going to do it at all. (Since my suggestions that GCD, NUMERATOR and DENOMINATOR be extended in the -only- possible way to the complex integers were ignored, I have little hope that Common Lisp will adopt the idea, however.) But the absoLULU is further down page 202 of the CL book: "Mathematically, (lcm) should return infinity." This is so wildly absurd that I can't even guess the fallacy that led to it. It should simply be 1. It is clearly a result of taking the identity (LCM ...) = (/ (ABS (* ...)) (GCD ...)) too seriously.  Received: from UTAH-CS.ARPA by SU-AI.ARPA with TCP; 2 Jan 85 13:56:53 PST Received: from utah-orion.ARPA by utah-cs.ARPA (4.42/4.40.1) id AA29829; Wed, 2 Jan 85 14:56:01 MST Received: by utah-orion.ARPA (4.42/4.40.1) id AA09462; Wed, 2 Jan 85 14:51:18 MST Date: Wed, 2 Jan 85 14:51:18 MST From: kessler%utah-orion@utah-cs (Robert Kessler) Message-Id: <8501022151.AA09462@utah-orion.ARPA> Cc: common-lisp@su-ai.arpa Subject: re: Yellow pages -------- > What information do we want to include with each program? My initial > list is as follows: > > A copyright notice or statement that the code is in the public domain. 
I agree that a copyright notice along with the appropriate release statement should be included. > Author's name. And address. > Current maintainer, with mail and net address. > Description of the program (optional if there is a document). I would still require a brief description. > Notes on any non-portable aspects of the code. > Known bugs and ideas for future extension. > Log of significant changes from one release to another. > Random comments. > > Anything else? Should this be in the same file as the code (in a big > initial comment), or a separate-but-related file? Should we try to come It should be in the same file. It's hard enough tracking down code across multiple files, without the necessity of looking for the documentation in some other place. > up with some syntax for making the fields easily identifiable by > machine, or is human-readability sufficient? I would make a standard form of comments where there is a keyword before each piece of information. Then a fairly simple program can have access. We use a scheme devised at HP which creates a header like the following, along with a revision list: %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % File: MINI-SUPPORT.SL % Description: A Small Meta System (Support routines for mini.min/mini.sl) % Author: Robert Kessler, Utah PASS Project % Created: 01-Jan-79 % Modified: 9-Mar-84 15:05:06 (Robert Kessler) % Mode: Lisp % Status: Experimental (Do Not Distribute) % % (c) Copyright 1979, University of Utah, all rights reserved. >>>> Put in some kind of release statement % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % % Revisions: % % 9-Mar-84 15:03:40 (Robert Kessler) % Added fluid definition of !*writingfaslfile, and modified % rule-define, so it does a boundp check before accessing the value of % the variable. For strange reasons, this caused a call to an undefined % function if the compiler wasn't loaded. 
% 27-Feb-84 11:58:47 (Robert Kessler) % Added Legal-single-char to check to see if a keyword is an % alphanumeric character. We do not want to make an alphanumeric % character a keyword. This would make words with that letter in it % invalid. Therefore, if the grammar definer makes something a % keyword (like 't), it must be separated by other delimiters. % ... %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% The NMODE text editor automatically updates the last modified line any time you write the file out. Bob. ------- End of Forwarded Message  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 2 Jan 85 07:38:00 PST Received: ID ; Wed 2 Jan 85 10:36:10-EST Date: Wed, 2 Jan 1985 10:36 EST Message-ID: Sender: FAHLMAN@CMU-CS-C.ARPA From: "Scott E. 
Fahlman" To: fateman%ucbdali@Berkeley (Richard Fateman) Cc: common-lisp@SU-AI.ARPA Subject: Yellow pages In-reply-to: Msg of 1 Jan 1985 23:03-EST from fateman%ucbdali at Berkeley (Richard Fateman) Good point. Public domain seems to work just fine, but a copyright by the author with all of the permissions you list would have much the same effect. Yes, we really do have to get us a lawyer; not having one to consult at CMU has already led to some mistakes on this stuff, I think. I guess some of the DARPA "seed money" for setting up this organization is going to have to flow in that direction, unless some company out there can loan us a lawyer on occasion -- doubtful, since this could lead to conflict of interest. -- Scott  Received: from UCB-VAX.ARPA by SU-AI.ARPA with TCP; 1 Jan 85 20:03:16 PST Received: from ucbdali.ARPA by UCB-VAX.ARPA (4.24/4.40) id AA28566; Tue, 1 Jan 85 19:56:21 pst Received: by ucbdali.ARPA (4.24/4.40) id AA24419; Tue, 1 Jan 85 20:03:13 pst Date: Tue, 1 Jan 85 20:03:13 pst From: fateman%ucbdali@Berkeley (Richard Fateman) Message-Id: <8501020403.AA24419@ucbdali.ARPA> To: Fahlman@CMU-CS-C.ARPA, common-lisp@SU-AI.ARPA Subject: Re: Yellow pages I think the best legal notice would include the copyright by the author or owner, followed by a permission to reproduce, redistribute, revise, etc. The fewer conditions the better. One kind of condition that some people have a lot of trouble with is one that requires that all revisions are public domain... it could mean that if that code is merged with other (proprietary) code, it all becomes public domain. E.g. how would Lucid feel about using a yellow-pages editor if it meant their compiler became public... Where is the lawyer on this project, anyway?  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 1 Jan 85 19:15:52 PST Received: ID ; Tue 1 Jan 85 22:16:47-EST Date: Tue, 1 Jan 1985 22:16 EST Message-ID: Sender: FAHLMAN@CMU-CS-C.ARPA From: "Scott E. 
Fahlman" To: common-lisp@SU-AI.ARPA Subject: Yellow pages On several occasions I have started to collect all of the random code files lying around here into an online library that, if it grows and prospers, would eventually become the Yellow pages (or one version of the yellow pages, anyway). Then I get to thinking about what sort of boilerplate to include in these files, and I end up putting the whole thing off until I have time to think about this. Well, here we go again, and this time I'll ask for advice: What information do we want to include with each program? My initial list is as follows: A copyright notice or statement that the code is in the public domain. Author's name. Current maintainer, with mail and net address. Description of the program (optional if there is a document). Notes on any non-portable aspects of the code. Known bugs and ideas for future extension. Log of significant changes from one release to another. Random comments. Anything else? Should this be in the same file as the code (in a big initial comment), or a separate-but-related file? Should we try to come up with some syntax for making the fields easily identifiable by machine, or is human-readability sufficient? Probably we'll have two library directories, one for truly portable stuff and another, much larger, for Perq-specific code (which still might be of considerable use to people writing similar programs). -- Scott  Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 29 Dec 84 05:59:45 PST Received: from SPA-LOS-TRANCOS by SPA-RUSSIAN via CHAOS with CHAOS-MAIL id 73170; Fri 28-Dec-84 23:38:17-PST Date: Fri, 28 Dec 84 23:38 PST From: Bill Gosper Subject: Irrational GCD To: common-lisp at SU-AI@MIT-MC.ARPA In-Reply-To: (failed mail)<841219234502.1.NFEP@NEPONSET.SCRC.Symbolics.COM> Message-ID: <841228233821.5.RWG@LOS-TRANCOS.SPA.Symbolics> Date: Wednesday, 19 December 1984, 23:45-EST From: David C. Plummer in disguise [(GCD 1\4 1\3) errs.] . . . BTW, the correct answer is 1\12. 
Not as far as ZL is concerned. Nor CL for that matter. Ouch! They must have been jet-lagged at those CL meetings. As Schroeppel pointed out several millennia back, (GCD A\B C\D) = (GCD A C)/(LCM B D), assuming args are in lowest terms. MACSYMA has long known this. But the absoLULU is further down page 202 of the CL book: "Mathematically, (lcm) should return infinity." This is so wildly absurd that I can't even guess the fallacy that led to it. It should simply be 1.  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 22 Dec 84 12:14:37 PST Received: ID ; Sat 22 Dec 84 15:15:19-EST Date: Sat, 22 Dec 1984 15:15 EST Message-ID: From: Rob MacLachlan To: "Bernard S. Greenberg" Cc: common-lisp@SU-AI.ARPA, lisp-designers%SCRC-QUABBIN@MIT-MC.ARPA Subject: No, I'm not joking In-reply-to: Msg of 22 Dec 1984 11:30-EST from Bernard S. Greenberg The syntax given for FLET definitely disallows any body declarations. The manual may be consistent in forbidding declarations in FLET, depending on how the definition of DECLARE (153-154) is interpreted. All that is precisely stated is that FLET, LABELS and MACROLET do allow declarations, which is indeed true, since they allow declarations within the definitions of the local functions or macros. If this is the case, then the description of where declarations appear should probably be made more specific or less specific so that it doesn't appear to be saying something which is untrue. Although declarations may not be allowed in FLET, it is untrue that to allow them would be useless. The INLINE declaration could be quite meaningfully applied. Rob  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 22 Dec 84 11:52:03 PST Received: ID ; Sat 22 Dec 84 14:22:50-EST Date: Sat, 22 Dec 1984 10:37 EST Message-ID: From: Rob MacLachlan To: "Bernard S. Greenberg" Cc: common-lisp@SU-AI.ARPA, lisp-designers@SCRC-QUABBIN.ARPA Subject: No, I'm not joking In-reply-to: Msg of 20 Dec 1984 14:53-EST from Bernard S. 
Greenberg What do you want to use FLET* for that LABELS won't do? There obviously is a difference, but I can't think of a place where the difference is useful. Rob  Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 22 Dec 84 11:37:00 PST Date: 22 Dec 1984 1130-EST From: Bernard S. Greenberg Subject: Re: No, I'm not joking To: RAM%CMU-CS-C@STANFORD cc: common-lisp@SAIL, lisp-designers%SCRC-QUABBIN%MIT-MC@STANFORD In-Reply-To: DATE: Sat, 22 Dec 1984 10:37 EST From: Rob MacLachlan ; WHAT do you want to use FLET* for that LABELS won't do? There obviously is a difference, but I can't think of a place where the difference is useful. We've had some internal discussion on this, here. Inconclusive, but I feel less strongly, as a result. Let me try to summarize some of the discussion, and some of my feelings. LABELS implies full recursivity and mutual recursivity. This may be harder than simply-nested single FLET's for some implementations, vis-a-vis allocation of closures, sharing of stack frames, etc. Some implementations might say, for LABELS, "Oh, this is the hard case", and do something less efficient than either full analysis or nested FLET's would allow. There is also a conceptual issue, that when I see LABELS, *I* say "Ohh, something trickily recursive or mutually-recursive is going to happen here!" This may be sheer prejudice, and completely unreasonable. This may be the heart of the issue. Then there is the parallel to LET "issue", which is admittedly not an issue, because there is no effect of "executing" the function definition. "So you can always use nested FLET's, if you think that's clearer." But that plays havoc with the indentation (that *IS* an issue). Then there was this comic dialogue between Moon and myself, "Oh, so who needs LET*, you can use nested LET's". "But you can't, because LET* allows declarations". So same for FLET*, except there's nothing you could possibly want to declare, out of the current set of CL declarations. 
But FLET doesn't allow body declarations, anyway! Well, on one page the CLM says it does, and on another it doesn't (we implement "no, it doesn't").... -------  Received: from SCRC-QUABBIN.ARPA by SU-AI.ARPA with TCP; 21 Dec 84 22:35:23 PST Received: from SCRC-CONCORD by SCRC-QUABBIN via CHAOS with CHAOS-MAIL id 115481; Thu 20-Dec-84 14:57:18-EST Date: Thu, 20 Dec 84 14:53 EST From: "Bernard S. Greenberg" Subject: No, I'm not joking To: lisp-designers@SCRC-QUABBIN.ARPA, common-lisp@SU-AI.ARPA Message-ID: <841220145308.8.BSG@CONCORD.SCRC.Symbolics.COM> FLET* wants very badly to be.  Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 21 Dec 84 06:19:27 PST Received: by mit-eddie.Mit-chaos.Arpa id AA02521; Fri, 21 Dec 84 09:08:47 est Received: by godot with CHAOS id AA24059; Fri, 21 Dec 84 08:54:43 est Date: Friday, 21 December 1984, 08:54-EST From: Dan Aronson Reply-To: mit-eddie!Dan%godot%mit-mc at AQUINAS Subject: Does anyone understand :REHASH-THRESHOLD? To: mit-eddie!common-lisp%su-ai.arpa@GODOT In-Reply-To: <8412201513.AA21378@mit-eddie.Mit-chaos.Arpa> I've just reread the definitions of :REHASH-SIZE and :REHASH-THRESHOLD in the silver book and they both seem perfectly clear. If :REHASH-THRESHOLD is an integer less than :REHASH-SIZE (where :REHASH-SIZE must be an integer) then when the table becomes more than threshold/size percent full it is grown. However, if :REHASH-THRESHOLD is a fraction, then that is the percentage that is used. I think that common sense has to be assumed when reading some of this stuff. --Dan Aronson  Received: from DEC-HUDSON.ARPA by SU-AI.ARPA with TCP; 20 Dec 84 07:17:29 PST Date: Thu, 20 Dec 84 10:11:20 EST From: greek@DEC-HUDSON Subject: Does anyone understand :REHASH-THRESHOLD? To: common-lisp@su-ai I was just working on the VAX LISP hashing functions when I noticed that the description of :REHASH-THRESHOLD has deteriorated a bit since the Colander edition. 
The silver book says that it can be an integer greater than zero and less than the :REHASH-SIZE. This makes no sense when you read the description of :REHASH-SIZE. It also says it can be a float between zero and one. Is this a percentage of the size or the rehash size? - Paul  Received: from CMU-CS-A.ARPA by SU-AI.ARPA with TCP; 19 Dec 84 22:50:29 PST Date: 20 Dec 84 0149 EST (Thursday) From: Guy.Steele@CMU-CS-A.ARPA To: common-lisp@SU-AI.ARPA Subject: Do I really know anything about Common LISP? In-Reply-To: "Robert Elton Maas's message of 17 Dec 84 00:43-EST" REM, On the contrary: I remember explicitly thinking about L-to-R evaluation several times while writing the book, but somehow the words I intended at one time to write never appeared in the book. One can intend all kinds of things, or fail to intend all kinds of things, but a book that big is so complicated it's all too likely you'll mess up somewhere, whether in transcribing other people's ideas or your own. That's why we have EMACS instead of :COPY TTY:,DSK: (and, for that matter, DDT as well as MIDAS) [non-MIT people please excuse the lapse into ITS terminology]. --Guy  Received: from XEROX.ARPA by SU-AI.ARPA with TCP; 15 Dec 84 23:14:24 PST Received: from Burger.ms by ArpaGateway.ms ; 15 DEC 84 23:10:54 PST From: masinter.pa@XEROX.ARPA Date: 15 Dec 84 23:10:43 PST Subject: recruiting To: common-lisp@SU-AI.ARPA I know this is in violation of whatever, but I thought I'd risk it anyway.... If you didn't know, Xerox is actively recruiting Lisp wizards. If you're interested in exploring your options, give me a call (415) 494-4365, or drop a line, or whatever...  
Received: from CMU-CS-A.ARPA by SU-AI.ARPA with TCP; 12 Dec 84 23:47:17 PST Date: 13 Dec 84 0232 EST (Thursday) From: Guy.Steele@CMU-CS-A.ARPA To: DALTON FHL (on ERCC DEC-10) Subject: Re: Manual does say left to right CC: common-lisp@SU-AI.ARPA In-Reply-To: <131716-530-041@EDXA> More and more I am reminded of the incident in which Asimov attended a lecture on the subject of one of his short stories. The lecturer made several remarks of the form "this part of the story has thus-and-so purpose" and "clearly the author had X in mind in writing that passage". Afterward Asimov came up to the lecturer and politely disagreed with some of these points, suggesting other interpretations. They had a friendly discussion but the lecturer did not yield, and finally Asimov played his trump card: "Well, I think I might know what the author had in mind because I am the author!" The lecturer smiled and said, "Oh, my goodness, I am very glad to meet you, Dr. Asimov; but tell me: just because you wrote the story, what makes you think you know anything about it?" Well, just because I wrote all the words in the CL manual doesn't mean I know any more than anyone else about what's in it. I'm sure a good deal of work remains for careful reviewers, critics, and exegetes. --Guy  Received: from CMU-CS-A.ARPA by SU-AI.ARPA with TCP; 12 Dec 84 23:46:36 PST Date: 13 Dec 84 0219 EST (Thursday) From: Guy.Steele@CMU-CS-A.ARPA To: Rem%IMSSS@SU-SCORE.ARPA Subject: I'm the cretin CC: common-lisp@SU-AI.ARPA In-Reply-To: "Rem@IMSSS's message of 23 Nov 84 17:41-EST" I agree that the code font in the CL manual is terrible. However, everything else the typesetter had to offer me was worse. We spent about a month just trying to cobble together pieces of several fonts that didn't look too terrible together and between them provided the full printing ASCII character set in a fixed-width form. Commercial typesetters simply are not used to printing programs; there are very few good-looking fixed-pitch fonts. 
Indeed, there are very few fixed-pitch fonts, and most of them are purposely designed to look "computerish". Yuck. --Guy  Received: from CMU-CS-A.ARPA by SU-AI.ARPA with TCP; 12 Dec 84 23:07:26 PST Date: 13 Dec 84 0202 EST (Thursday) From: Guy.Steele@CMU-CS-A.ARPA To: "Scott E. Fahlman" Subject: Re: left to right order of evaluation CC: common-lisp@SU-AI.ARPA In-Reply-To: The first full paragraph on page 61 is discussing the application of a function to arguments, not the evaluation of argument forms in a function call. I read this paragraph as dealing with already-evaluated arguments, saying nothing about how the arguments were produced. I, at least, have understood implicitly that Common Lisp would evaluate argument forms from left to right, but indeed the manual seems not to say anything explicit about it directly. The one reason for not enforcing this order might be to make life easier for certain multiprocessing implementations by invalidating programs that depend on such ordering. (In most cases I find it is poor style to depend critically on the left-to-right argument-evaluation ordering.) All things considered, however, I do not advocate eliminating this left-to-right ordering from the language if in fact we are all agreed that such ordering is now in the language definition. --Guy  Received: from CMU-CS-SPICE.ARPA by SU-AI.ARPA with TCP; 11 Dec 84 13:23:19 PST Date: Tuesday, 11 December 1984 16:18:24 EST From: Joseph.Ginder@cmu-cs-spice.arpa To: common-lisp@su-ai.arpa Subject: Re: left to right order of evaluation Message-ID: <1984.12.11.21.10.13.Joseph.Ginder@cmu-cs-spice.arpa> I distinctly remember having a discussion about this at one of the few Common Lisp meetings (in Pittsburgh?). As I recall, it was decided (with little debate) that left-to-right order of argument evaluation was adopted. 
While the decision regarding this issue in particular does not necessarily imply strict left-to-right evaluation, this topic arose when someone at the meeting asked if the default value expressions for optional arguments could refer to an argument preceding the optional in the formal parameter list (in left-to-right order, of course). This led to the topic of whether, in general, left-to-right evaluation order was required. The answer to both questions, as I recall, was yes. Rather than depend on my or someone else's memory, did anyone keep notes of the meetings? I think Guy did. --Joe P.S. Not that this recollection solves the problem of deciding whether the decision of the committee was the "right" one or not...... -- if such matters are open to change. (I hope not.)

Received: from [14.0.0.9] by SU-AI.ARPA with TCP; 26 Nov 84 09:54:36 PST Received: from edxa.ac.uk by 44d.Ucl-Cs.AC.UK via Janet with NIFTP id a001884; 26 Nov 84 16:47 GMT From: DALTON FHL (on ERCC DEC-10) Date: Monday, 26-Nov-84 16:07:40-GMT Message-ID: <131716-530-041@EDXA> To: common-lisp , J.Dalton%edxa@ucl-cs.arpa Subject: Manual does say left to right -------- It turns out that The Book *does* explicitly specify that argument forms are evaluated from left to right; but not, unfortunately, in any place as obvious as page 61. If you look on page 194, 3rd full paragraph, you will find:

    For functions that are mathematically associative (and possibly
    commutative), a Common Lisp implementation may process the arguments
    in any manner consistent with associative (and possibly commutative)
    rearrangement. This does not affect the order in which the argument
    forms are evaluated, of course; that order is always left to right,
    as in all Common Lisp function calls. ...

This section is clearly talking about evaluating argument forms as distinct from processing arguments. Page 61, on the other hand, is talking only about processing arguments.
The meaning of page 61 can be clarified by juxtaposing it with part of page 58:

    [p. 58] ... Any and all remaining elements of the list are forms to
    be evaluated; one value is obtained from each form, and these values
    become the *arguments* to the function. The function is then
    *applied* to the arguments. ...

    [p. 61] When the function represented by the lambda-expression is
    applied to the arguments, the arguments and parameters are processed
    in order from left to right. ...

(Isn't textual criticism fun? I knew those Comp Lit courses weren't for nothing. :-)) -- Jeff Dalton AI Applications Institute University of Edinburgh --------

Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 23 Nov 84 20:25:45 PST Received: ID ; Fri 23 Nov 84 23:24:34-EST Date: Fri, 23 Nov 1984 23:24 EST Message-ID: Sender: FAHLMAN@CMU-CS-C.ARPA From: "Scott E. Fahlman" To: Rem@MIT-MC.ARPA Cc: COMMON-LISP@SU-AI.ARPA Subject: Left to right, or parallel, evaluation of arguments to function? In-reply-to: Msg of 23 Nov 1984 17:41-EST from Rem at IMSSS Evaluation of arguments is definitely intended to be left-to-right. The only argument is about whether the manual actually comes out and says this (I think it does) or whether it just hints at this. In the latter case, the omission is just an oversight. It is not a goal of Common Lisp to be a good language for dataflow or other statement-level parallel architectures. There's no clear consensus on what the right set of features for a dataflow language would be, but Common Lisp, even if we loosened up the rules on argument evaluation, would not be the right thing. It seems silly to include some feature just because it makes Common Lisp slightly less bad at doing something it is not intended to do anyway, so we went with Lisp tradition in requiring left-to-right evaluation.
-- Scott

Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 23 Nov 84 20:08:43 PST Received: ID ; Fri 23 Nov 84 23:07:33-EST Date: Fri, 23 Nov 1984 23:07 EST Message-ID: Sender: FAHLMAN@CMU-CS-C.ARPA From: "Scott E. Fahlman" To: Rem@MIT-MC.ARPA Cc: COMMON-LISP@SU-AI.ARPA Subject: Being able to get list of pending CATCH tags? In-reply-to: Msg of 23 Nov 1984 17:44-EST from Rem at IMSSS I think that a PENDING-CATCH-TAGS function sounds pretty useful for debugging. It would be easy (for the implementors) to do in all the implementations I know about. I think it might be worthwhile for us to add this to our own implementation, though I am not sure it is worth putting into the white pages. Since low-level debugging stuff is inherently unportable, there's no compelling reason to add this function to the white pages. Note that BACKTRACE (or BAKTRACE, for you Maclisp fans) is not there either, though every self-respecting implementation will provide some way to examine the stack. -- Scott

Sent: to SU-AI.ARPA by IMSSS.? via ETHERNET with PUPFTP; 84-11-23 17:47:20 PST (=GMT-8hr) Date: 84-11-23 17:45:04 PST (=GMT-8hr) Message-id: SU-IMSSS.REM.A131714014450.G0398 From: Robert Elton Maas To: COMMON-LISP@SU-AI Subject: access to list of pending CATCH tags? Reply-to: "REM%IMSSS"@SU-SCORE.ARPA Date: 23 Nov 84 1519 PST From: Dick Gabriel Subject: List of pending catch tags Despite what many people believe, the Common Lisp designers have left some programs out of the Common Lisp spec, so you, the user, can have some fun too and write your own. That's rather hard to believe given the plethora of functions I've never in my life used such as: ASINH ACOSH ATANH ... STRING-RIGHT-TRIM ... With all those rarely-used functions as REQUIRED functions in any CL implementation, it's hard to believe a basic debugging need like seeing which catch tags are pending has been left for the programmer to invent himself. More likely it was an oversight or poor design decision.
Date: 23 November 1984 18:48-EST From: Kent M Pitman Subject: memo-catch ... However, I assume REM's comment was for the sake of debugging, since it might be helpful interactively in determining how to proceed a broken program that had been written with ordinary primitives. I admit my example of database application was answered by the MEMO-CATCH above, but indeed my original idea was more for debugging as KMP guessed correctly. Perhaps there should be two levels: a way of getting pending catch tags that you can *always* get from program code inside a pending catch, such as I requested and KMP concurs, for debugging; and an explicit user-level facility such as MEMO-CATCH. But MEMO-CATCH really should provide not just the list of tags, but also user-level documentation for each, such as "if you use this catch tag you'll lose all your database updates since the first of the preceding month" and "if you use this catch tag you'll not lose any updates but a wizard will have to patch the database back together before it can be used again" etc. Thus I'd like the built-in CATCH to provide a crude level of tags for the sake of debugging and graceful recovery from gross bugs during debugging, and the user can provide an elaborate mechanism for end-users. For what it's worth, a common argument against your MEMO-CATCH solution goes something like: ``If I'd known that was the catch I needed to memoize, I wouldn't have had to memoize it!'' Yup, I agree. If you need it for debugging, it should be there always (or at least by default during debugging) instead of requiring an explicit action each time to install it. That's the whole idea of the LISP interpreter: if you leave things out by mistake, the debug-runtime environment has enough info around that you can diagnose the problem from inside the error without having to modify your source and recompile and/or reload.
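(A minimal sketch of the two-level facility described above, assuming a hypothetical *PENDING-TAGS* special that an implementation's own CATCH would maintain automatically; here a user-level macro maintains it instead, and all names are illustrative, not part of any spec:)

```lisp
;; Hypothetical sketch: a special variable holding (tag . documentation)
;; pairs for every dynamically pending catch, innermost first.
(defvar *pending-tags* '())

(defmacro documented-catch ((tag doc) &body forms)
  "Like CATCH, but records TAG with a user-level DOC string on
*PENDING-TAGS* for the dynamic extent of the catch."
  (let ((g (gensym)))
    `(let ((,g ,tag))
       (catch ,g
         ;; Bind inside the catch so the entry disappears on unwind.
         (let ((*pending-tags* (acons ,g ,doc *pending-tags*)))
           ,@forms)))))

(defun pending-catch-tags ()
  "Return the list of (tag . documentation) pairs now pending,
innermost first; something a break loop could display."
  *pending-tags*)
```

For example, inside (documented-catch ('abort-catch "unwinds the update") ...) a break loop could call (pending-catch-tags) and show the tag together with its documentation.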
Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 23 Nov 84 15:47:58 PST Date: 23 November 1984 18:48-EST From: Kent M Pitman Subject: memo-catch To: rpg @ SU-AI cc: common-lisp @ SU-AI Actually, you'd want to bind the tag list AFTER you enter the catch frame, but I agree it can be done. e.g.,

    (defmacro memo-catch (tag &body forms)
      (let ((g (gentemp)))
        `(let ((,g ,tag))
           (*catch ,g
             (let ((*pending-tags* (cons ,g *pending-tags*)))
               ,@forms)))))

However, I assume REM's comment was for the sake of debugging, since it might be helpful interactively in determining how to proceed a broken program that had been written with ordinary primitives. For what it's worth, a common argument against your MEMO-CATCH solution goes something like: ``If I'd known that was the catch I needed to memoize, I wouldn't have had to memoize it!'' -kmp

Date: 23 Nov 84 1519 PST From: Dick Gabriel Subject: List of pending catch tags To: rem@IMSSS, common-lisp@SU-AI.ARPA Despite what many people believe, the Common Lisp designers have left some programs out of the Common Lisp spec, so you, the user, can have some fun too and write your own. If you're interested in maintaining a list of pending catch tags, write it yourself with a special, called something like *PENDING-TAGS*. Then you can write:

    (defmacro memo-catch (tag form)
      `(let ((*pending-tags* *pending-tags*)
             (tag ,tag))
         (push tag *pending-tags*)
         (catch tag ,form)))

-rpg-

Received: from SU-SCORE.ARPA by SU-AI.ARPA with TCP; 23 Nov 84 14:49:29 PST Received: from IMSSS by Score with Pup; Fri 23 Nov 84 14:48:11-PST Date: 23 Nov 1984 1444-PST From: Rem@IMSSS Subject: Being able to get list of pending CATCH tags? To: COMMON-LISP%SU-AI@SCORE Date: 21 November 1984 14:19-EST From: Kent M Pitman Subject: Stability? Sure, the compiler could easily figure out where an EQ-Throw could be used. I'll go and fix our implementation if everyone thinks I should. But that's not my idea of a stable language specification. Why not? No working program will break.
Formerly undefined programs will become well-defined. There is nothing even remotely unstable about such a change. All changes should be so stable! Hey, you're thinking only about the users on a system with perfect implementors who can keep up with all the changes without breaking anything, not the poor implementors who must be so perfect. More than likely, after each incompatible change in the spec there'll be a period of years when not all implementations have caught up. If changes occur more often than that, users will never obtain a working up-to-date implementation, and thus despite a common latest-standard at any moment there will be no implementations satisfying any common standard. I think your above statement should be watered down considerably. Hey, why not have Throw take a keyword argument for the test to use? This reminds me of a serious deficiency in USLISP and PSL and I think Common-LISP too. There's no way for the user in a break loop to get a listing of pending CATCH tags and thus have some idea which one to throw to in order to unwind cleanly from a system-detected error. For example, catch tags in a database update application might include NORMAL-RETURN and ABORT-CATCH, where the latter would enter a menu that offers unwinding the update or closing files in their present state or rolling back to a checkpoint. The poor user might not have time to search the documentation to find the name of the catch tag. Surely being able to ask (PENDING-CATCH-TAGS) from the break package would be useful. I propose Common LISP include such a way of getting a list of currently pending catch tags by evaluation of some form like that. Please correct me if CL already provides this facility. -------

Received: from SU-SCORE.ARPA by SU-AI.ARPA with TCP; 23 Nov 84 14:44:30 PST Received: from IMSSS by Score with Pup; Fri 23 Nov 84 14:43:10-PST Date: 23 Nov 1984 1441-PST From: Rem@IMSSS Subject: Left to right, or parallel, evaluation of arguments to function?
To: COMMON-LISP%SU-AI@SCORE (If you want to reply but your host doesn't know how to reach REM@SU-SCORE, try REM%IMSSS@SCORE or if that doesn't work from your host then REM@MIT-MC. I eagerly await namedomains&servers which should alleviate this problem.) Date: Wed, 21 Nov 1984 15:42 EST From: "Scott E. Fahlman" Subject: left to right order of evaluation If I understand ... question, ... are asking whether, in a normal function call like "(g (foo))", the mapping from the symbol G to its definition occurs before or after the call to FOO, which might change the definition of G as a side-effect. I don't think that the manual currently addresses this issue. I would advocate leaving this decision to the implementor, I would state it a little more strongly. It is generally bad programming practice for a program to modify itself out from under itself. Most languages keep program and data totally separate, even putting program on readonly pages. Some CPUs carry this further, using different segment registers for program and data, and having data-manipulation instructions UNABLE to fetch or store relative to the program base register. In LISP we can be a little lax, doing things like creating an s-expression and then passing it to EVAL a moment later, or loading and compiling a program qua data into memory and then immediately making use of the resultant compiled functions by calling them qua machine-language subroutines. But note that although with respect to the whole program we are treating "it" as both program and data at the same time, we always treat a given sub-form either as one or the other at any given moment, and never (in those examples) modify a form in place while it is in the midst of being executed. At worst we execute it completely, then while outside of it we modify it in preparation for the next execution.
In some extreme cases a macro expander may wish to modify the form to be faster next time, or a UUO link may be optimized during a call, but the semantics of the form (except for timing) are the same before and after the destructive modification. I therefore propose that we say "IT IS AN ERROR" to attempt to modify any piece of code (interpreted or compiled) to have different semantics, when that piece of code is currently in the midst of execution, including functions which are currently in a half-called state while their arguments are being recursively evaluated. I'm not sure whether or not we should explicitly allow any destructive modification which doesn't change semantics; after all, such a mod may screw up the PC, causing some parts of the current evaluation to be skipped or double-executed if data is compressed or expanded in the vicinity.

Date: Wed, 21 Nov 1984 16:07 EST From: "Scott E. Fahlman" Subject: left to right order of evaluation I believe that the first full paragraph on page 61 of the aluminum edition specifies left-to-right evaluation of arguments for all function calls. I disagree. The purpose of the paragraph is to resolve the question of which dummy arguments match which actual arguments, not the realtime matter of when the actual arguments are evaluated to yield actual values which are bound to dummy arguments. It's completely reasonable (to me it seems) that an implementation would push dotted pairs (dumarg . actarg) on a stack during resolution of argument matching as described in that paragraph, but then actually evaluate them in reverse order as they were popped off the stack and the evaluated results stored in another memory array-like place (stack, registers, whatever). Note there's no mention of EVALuating in that paragraph, nor of values, only of PROCESSING of parameters and arguments, where PROCESSING obviously (to me) means the matching process and resolution of &REST etc. arguments.
Furthermore, insisting on a particular order of evaluation of arguments in the general case precludes implementing on a parallel processor such as a dataflow machine. It would be a royal pain to completely rewrite a program to replace nearly every function call in the whole program by an equivalent parallel-evaluation function call, whereas it would be not too bad to replace the very few places sequential evaluation is really needed by an explicit sequential-argument-evaluation macro or special form. In typical usage, most cases of sequencing fall into three classes:

(1) Arguments must be evaluated before calling the function. SCHEME et al with lazy evaluators which may leave arguments not-completely-evaluated at the time a function is called present some problems here, but I think SCHEME simply has to be smart enough to assure that side-effects are emulated as if arguments had been evaluated before the function was called.

(2) Explicit sequential forms such as PROG, PROGN, etc. These clearly must evaluate arguments in the prescribed sequence (or in SCHEME emulated to have equivalent semantics).

(3) Case resolution forms such as COND. Either they must be evaluated in the expected order, or side-effects must be emulated as if they did.

The only common errant cases seem to be:

(4) Use of AND or OR to emulate COND, just to save a few parentheses.

(5) Use of LIST to evaluate several forms in sequence and return a list of their results for visual inspection.

I think AND, OR, LIST, and all other functions should be permitted to evaluate their arguments in arbitrary order, including in parallel, and when cases 4 and 5 are needed special forms or macros should be used. Thus (AND (PAIRP FOO) (EQ (CAR FOO) 'FOO)) would be replaced by (SEQ-AND (PAIRP FOO) (EQ (CAR FOO) 'FOO)) or something similar, but (AND (LESSP (LENGTH LIST1) 500) (LESSP (LENGTH LIST2) 700)) could be run in parallel on some machines.
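(A sketch of the SEQ-AND replacement proposed above; since Common Lisp's AND is in fact already defined to evaluate sequentially, the macro is purely illustrative of what the proposal would look like:)

```lisp
;; Illustrative only: a SEQ-AND guaranteeing strict left-to-right,
;; short-circuiting evaluation by expanding into nested IFs.
(defmacro seq-and (&rest forms)
  (cond ((null forms) t)                    ; (seq-and) => T, like AND
        ((null (rest forms)) (first forms)) ; last form gives the value
        (t `(if ,(first forms)
                (seq-and ,@(rest forms))
                nil))))
```

Under the proposal one would write (seq-and (consp foo) (eq (car foo) 'foo)) where order matters, leaving plain AND free to evaluate in parallel.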
I would like to see Common-LISP take the bold step of PERMITTING the effective use of dataflow and other parallel-processing machines without extensive modification of programs.

Date: Wed 21 Nov 84 16:22:25-EST From: Rodney A. Brooks Subject: Re: left to right order of evaluation I don't think the paragraph on page 61 specifies order of evaluation of argument forms. It is talking about application of a function, not evaluating arguments to which a function will get applied. From the third paragraph on page 61 it seems clear that ``processed'' does not include evaluation of argument forms as it talks about intermingling binding of variables and ``processing''. Glad to find somebody agrees with my interpretation of the manual. The bottom of page 58 and top of 59 seem to be the place to specify order of evaluation and there is no mention of order of evaluation. There should be something explicit there even if it is to say the order is undefined. I agree completely. Note in particular it says "the forms 4 and 5 are evaluated, producing arguments 4 and 5 for the multiplication", not "the form 4 is evaluated producing 4, and then the form 5 is evaluated producing 5, which are the arguments for the multiplication". Clearly it's ambiguous whether they are done in parallel or in sequence. I think the right thing is left to right evaluation of argument forms and mute on when the symbol-to-function mapping happens. Perhaps for the present, although I would still like to see parallel evaluation of arguments permitted as I described earlier. [An aside: who's the cretin who picked the font for numbers in the aluminum edition? Backwards-E is simply garbage. We're implementing a language for 5*7-matrix and bitmapped displays, not wristwatches using liquid-crystal 7-segment displays, huh? It was a long time perusing the manual (on other pages where it wasn't obviously a number) before I realized they meant the number 3 and not some typographic error.]
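(The difference between the two positions is observable whenever argument forms side-effect shared state; under the left-to-right rule the following result is determinate, while under arbitrary or parallel order it would not be:)

```lisp
;; With left-to-right argument evaluation this must yield (1 2);
;; with unspecified order it could just as well come out (2 1).
(let ((x 0))
  (list (incf x) (incf x)))   ; => (1 2) given left-to-right order
```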
-------

Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 23 Nov 84 14:07:14 PST Date: 23 November 1984 17:07-EST From: Jonathan A Rees Subject: left to right order of evaluation To: BROOKS @ MIT-OZ cc: common-lisp @ SU-AI In-reply-to: Msg of Wed 21 Nov 84 15:52:49-EST from Rodney A. Brooks Date: Wed 21 Nov 84 15:52:49-EST From: Rodney A. Brooks Is there any rational reason not to insist on left to right evaluation? [People not interested in the answer to this question need read no further.] I'll give you the reasons I can think of, and leave it to you to determine their rationality.

1. There must be, since so many carefully designed languages (e.g. BLISS, Algol 68, Scheme) and others as well (C, Pascal??, Modula??) make a point of defining argument evaluation order to be indeterminate.

2. Defining the order of argument evaluation only encourages users to write programs that make use of a particular order. In the view of certain pedagogues, such dependence is a stylistically horrible thing which should be discouraged by the language design. (In the view of others, of course, it's just fine.)

3. While I agree that a compiler can do a lot in the way of detecting cases where code motion is permissible, indeterminate order permits code optimizations which no amount of compile-time analysis can prove correct. As a simple example, consider (BAR *X* (FOO)). Assume a machine model in which the two arguments are to be put in registers A and B, and the return value from calls to unknown functions comes back in A. Left-to-right code for this would look like

    MOVE *X*,TEMP
    CALL 0,FOO
    MOVE A,B
    MOVE TEMP,A
    CALL 2,BAR

*X* needs to be saved because the call to FOO, an unknown function, might clobber it.
But better code would result if the call to FOO happened first:

    CALL 0,FOO
    MOVE A,B
    MOVE *X*,A
    CALL 2,BAR

A compiler might like to generate code for the arguments which use the largest number of registers (the "most complicated" arguments) first, in order to reduce the amount of data shuffling the object code will have to do.

4. A particular machine architecture and Lisp implementation might prefer to evaluate right-to-left, e.g. so that arguments can be pushed onto a stack properly.

I see now that Common Lisp is decidedly (if not definedly) left-to-right, and that's probably the politically right thing. Just for the record, however, I'd like to say that I believe that a portable, indeterminate-order language can be made to work.

Received: from SU-SCORE.ARPA by SU-AI.ARPA with TCP; 23 Nov 84 03:39:27 PST Received: from IMSSS by Score with Pup; Fri 23 Nov 84 03:38:06-PST Date: 23 Nov 1984 0334-PST From: Rem@IMSSS Subject: (EQ ...) untrue for numbers? To: COMMON-LISP%SU-AI@SCORE Somebody made the claim that (DEFUN TST (Z) (LET ((X Z) (Y Z)) (EQ X Y))) could in MacLISP sometimes return NIL if given a numeric argument, but I forgot who it was. I tried it using PLISP (one-segment full-bibop MacLISP) at SU-AI under the claimed conditions and was unable to get anything except T to return. I have a transcript to show that person. Would the person please identify self? -------

Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 22 Nov 84 20:59:07 PST Received: ID ; Thu 22 Nov 84 23:57:55-EST Date: Thu, 22 Nov 1984 23:57 EST Message-ID: Sender: FAHLMAN@CMU-CS-C.ARPA From: "Scott E. Fahlman" To: "George J. Carrette" Cc: common-lisp@SU-AI.ARPA Subject: global function namespace given too much weight perhaps? The justification is that (after some debate) we decided not to try to make the function name space parallel to the variable name space in all respects.
These two name spaces do rather different things, and we decided not to put in all the things that would be needed to make the two spaces truly parallel. For example, in addition to the SETQ-equivalent, we left out functional variables in argument lists. Whether this is a reasonable justification I leave up to you, but that's how it happened. The inconsistency, if any, is that we let FLET and LABELS sneak back into the language because a few people thought they would be useful. -- Scott

Received: from XEROX.ARPA by SU-AI.ARPA with TCP; 22 Nov 84 15:46:25 PST Received: from Cabernet.MS by ArpaGateway.ms ; 22 NOV 84 15:45:36 PST Date: 22 Nov 84 15:43 PST From: JonL.pa@XEROX.ARPA Subject: Re: left to right order of evaluation In-reply-to: "Scott E. Fahlman" 's message of Wed, 21 Nov 84 16:07 EST To: Fahlman@CMU-CS-C.ARPA cc: BROOKS%MIT-OZ@MIT-MC.ARPA, common-lisp@SU-AI.ARPA Yes, there is a slight misnomer in the discussion -- it is, as you mention in your previous note, relevant only to whether "to do the SYMBOL-FUNCTION of G before computing the args" or not. The question of left-to-right order for argument evaluation wasn't ever, as far as I can recall, seriously questioned. Two items may influence the pertinent point however: 1) Interpreters classically "snoop" at the function position first, regardless of whether they are classical Lisp interpreters, or the "mindless uniformity" style so favored by functional programming types. Thus, without extra forethought, an interpreter would probably do the SYMBOL-FUNCTION before doing the args; in fact, without the severe limitation on "special forms" in CL, it would be possible in other dialects to change even the EXPR/FEXPR decision on the "function" after beginning the argument evaluations.
2) As you mention, there is some variability here in compile-time environments; back at the beginning of the VAX/NIL project, we opted for a design that would "open up" a function call frame on the stack before actually evaluating the arguments; the format of the compiled code was to "push" into this opened-up frame, rather than merely "push"ing at the stack tip. This permitted an error for "undefined function" to be signalled before the "arguments" were computed (with potentially many side-effects); it also enabled more debugging information to be obtained from the stack frame during the interval of time of evaluating the arguments. Point 2 was a bit controversial, especially in view of the LispMachine pattern which was simply to push at the stack tip, and *after* all arguments were pushed, then decide what to do about a function call. One area of concern was performance, and the following should explain why this decision, *for the VAX*, was not a performance impact. Since the VAX has a very flexible set of instructions, with plenty of registers to be "frame pointers", this decision caused no slowdown in the compiled code, but rather cost only an extra byte or two per operand pushed. The VAX PUSHL opcode, oddly enough, isn't particularly fast -- Rich Bryan and I once reported this odd observation to a mail list (forget which one, maybe NIL@MC), but others, notably Fateman, had reported the same finding earlier. To no one's surprise, the VAX CALLS instruction is the big loser here, and I've heard rumors that the DEC Common Lisp folks have, upon "second system re-design", dropped it altogether in favor of JSB for Lisp function calling. As you may infer, I rather favor the doing of the SYMBOL-FUNCTION first, not only because of the extra debugging information that this way may supply, but also because it tends to lessen the conflict engendered by the "separate function cell" which so antagonizes the functional programming types.
However, like you, I would place "performance" at a priority higher than that for this particular matter of uniformity. -- JonL --

Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 22 Nov 84 14:49:32 PST Date: 22 November 1984 17:50-EST From: George J. Carrette Subject: global function namespace given too much weight perhaps? To: Fahlman @ CMU-CS-C cc: common-lisp @ SU-AI In-reply-to: Msg of Thu 22 Nov 1984 13:12 EST from Scott E. Fahlman Seems there is a hole here, or at least a gratuitous inconsistency, in that there is a (call it) variable namespace binding construct, LET, and a corresponding side-effect construct, SETQ, and there is a function namespace binding construct, FLET, but the side-effect construct FSETQ (if you will) is missing. Given that it is missing, can we give a reasonable justification for that, or can we fix the inconsistency?

Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 22 Nov 84 10:13:25 PST Received: ID ; Thu 22 Nov 84 13:12:13-EST Date: Thu, 22 Nov 1984 13:12 EST Message-ID: Sender: FAHLMAN@CMU-CS-C.ARPA From: "Scott E. Fahlman" To: "George J. Carrette" Cc: common-lisp@SU-AI.ARPA Subject: global function namespace given too much weight perhaps? In-reply-to: Msg of 22 Nov 1984 02:06-EST from George J. Carrette No, FUNCTION is not one of the forms for which SETF is supposed to work. SYMBOL-FUNCTION is, however. -- Scott

Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 21 Nov 84 23:06:01 PST Date: 22 November 1984 02:06-EST From: George J. Carrette Subject: global function namespace given too much weight perhaps? To: Fahlman @ CMU-CS-C cc: common-lisp @ SU-AI In-reply-to: Msg of Wed 21 Nov 1984 15:42 EST from Scott E. Fahlman Please excuse my use of an old name for the concept, (defun fset (a b) (setf (symbol-function a) b)) It was a good thing I didn't use FSETQ, i.e. (defmacro FSETQ (A B) `(SETF #',A ,B)) Which brings up a question: is FSETQ as it is defined here supposed to work in common-lisp?
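(Restating the distinction in code: SETF of SYMBOL-FUNCTION is the supported spelling, while SETF of a FUNCTION form, which is what the FSETQ above expands into, is not required to work; the function name DOUBLE here is just an illustration:)

```lisp
;; Supported: SYMBOL-FUNCTION is a SETF-able place.
(setf (symbol-function 'double)
      #'(lambda (n) (* 2 n)))

(double 21)   ; => 42

;; Not required by the language: (setf #'double ...), i.e. SETF of a
;; FUNCTION form, which is exactly what the FSETQ macro expands into.
```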
Received: from XEROX.ARPA by SU-AI.ARPA with TCP; 21 Nov 84 15:24:20 PST Received: from Semillon.ms by ArpaGateway.ms ; 21 NOV 84 15:24:44 PST Date: 21 Nov 84 15:22 PST From: masinter.pa@XEROX.ARPA Subject: Re: left to right order of evaluation In-reply-to: Rodney A. Brooks 's message of Wed, 21 Nov 84 15:52:49 EST To: common-lisp@SU-AI.ARPA I believe that any portable Lisp needs to have a well-defined order of evaluation for the whole language, including specifying when the symbol-to-function mapping happens.

Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 21 Nov 84 13:24:43 PST Date: Wed 21 Nov 84 16:22:25-EST From: Rodney A. Brooks Subject: Re: left to right order of evaluation To: Fahlman@CMU-CS-C.ARPA cc: common-lisp@SU-AI.ARPA In-Reply-To: Message from ""Scott E. Fahlman" " of Wed 21 Nov 84 16:11:09-EST I don't think the paragraph on page 61 specifies order of evaluation of argument forms. It is talking about application of a function, not evaluating arguments to which a function will get applied. From the third paragraph on page 61 it seems clear that ``processed'' does not include evaluation of argument forms as it talks about intermingling binding of variables and ``processing''. The bottom of page 58 and top of 59 seem to be the place to specify order of evaluation and there is no mention of order of evaluation. There should be something explicit there even if it is to say the order is undefined. I think the right thing is left to right evaluation of argument forms and mute on when the symbol-to-function mapping happens. No readable code could rely on either order for the latter. (Gee, wouldn't it be fun to be a lawyer...) -------

Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 21 Nov 84 13:09:38 PST Received: ID ; Wed 21 Nov 84 16:07:55-EST Date: Wed, 21 Nov 1984 16:07 EST Message-ID: Sender: FAHLMAN@CMU-CS-C.ARPA From: "Scott E. Fahlman" To: "Rodney A.
Brooks" Cc: common-lisp@SU-AI.ARPA Subject: left to right order of evaluation In-reply-to: Msg of 21 Nov 1984 15:52-EST from Rodney A. Brooks I believe that the first full paragraph on page 61 of the aluminum edition specifies left-to-right evaluation of arguments for all function calls. Or at least, that's the way I read it. I feel certain that it was everyone's intention that arguments in a normal function call should be evaluated left-to-right, but I don't recall any discussion of when to do the symbol-to-function mapping. -- Scott

Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 21 Nov 84 12:56:20 PST Date: Wed 21 Nov 84 15:52:49-EST From: Rodney A. Brooks Subject: Re: left to right order of evaluation To: JAR@MIT-MC cc: common-lisp@SU-AI.ARPA In-Reply-To: Message from "Jonathan A Rees " of Wed 21 Nov 84 15:27:00-EST I think JAR is correct that there is no explicit mention of left to right order of evaluation in the CL manual. However on page 99 of the aluminum edition there are the following two paragraphs: "In generalized-variable references such as shiftf, incf, push, and setf of ldb, the generalized variables are both read and written in the same reference. Preserving the source program order of evaluation and the number of evaluations is particularly important. As an example of these semantic rules, in the generalized-variable reference (setf reference value) the value must be evaluated *after* all the subforms of the reference because the value form appears to the right of them." And then on page 102 in discussing defsetf there is a sentence: "The implementation of defsetf takes care of ensuring that subforms of the reference are evaluated exactly once and in the proper left-to-right order." So if the official position is that order is not defined then the above pieces of the manual are bogus. Is there any rational reason not to insist on left to right evaluation?
(Note that LET insists (page 110) on evaluating the value forms to which the variables get bound left to right, so any optimization and side-effect analysis machinery already needs to be built to make LET super efficient.) -------  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 21 Nov 84 12:43:53 PST Received: ID ; Wed 21 Nov 84 15:42:13-EST Date: Wed, 21 Nov 1984 15:42 EST Message-ID: Sender: FAHLMAN@CMU-CS-C.ARPA From: "Scott E. Fahlman" To: "George J. Carrette" Cc: common-lisp@SU-AI.ARPA Subject: left to right order of evaluation In-reply-to: Msg of 21 Nov 1984 14:11-EST from George J. Carrette If I understand your question (not easy, since you use things like FSET that are not defined in Common Lisp), you are asking whether, in a normal function call like "(g (foo))", the mapping from the symbol G to its definition occurs before or after the call to FOO, which might change the definition of G as a side effect. I don't think that the manual currently addresses this issue. I would advocate leaving this decision to the implementor, since I know of at least one implementation that would be badly screwed if either decision were mandated. Put another way, I'm proposing that it "is an error" in portable code to change the definition of a symbol while computing the arguments in a normal call to that symbol, and the results will be unpredictable if you do this. I suppose the "right" thing is to do the SYMBOL-FUNCTION of G before computing the args, but specifying this would limit the flexibility available to the implementor in a place where tricks can buy you a LOT. The funcall case is different, I think. In something like "(funcall #'g (foo))", the translation of the symbol G to a function object clearly occurs when (function g) is evaluated, and nothing that FOO does to G (short of destructively modifying its definition) can alter that after the fact.
-- Scott  Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 21 Nov 84 12:28:13 PST Date: 21 November 1984 15:27-EST From: Jonathan A Rees Subject: left to right order of evaluation To: GJC @ MIT-MC cc: common-lisp @ SU-AI In-reply-to: Msg of 21 Nov 1984 14:11-EST from George J. Carrette Date: 21 November 1984 14:11-EST From: George J. Carrette ; Is (F) => "left to right is strict" ; or => "left to right is violated" The question is, does common-lisp define the result of (F) in this case? I remember once looking in the CL manual for a definition of argument evaluation order, and not finding any. (My apologies if I am ignorant of some previous discussion of this.) Options: (a) It was left undefined and was meant to be left undefined (perhaps because there was no agreement as to whether or not to define it!). (b) It was defined to be left-to-right, or it was meant to be defined and if the manual doesn't say so it was an oversight. In case (a), I think that, because (probably) all implementations are left-to-right, if any CL implementation actually came along which was not left-to-right, 70% of real CL programs would fail to work, bombing out in very obscure ways. I don't think that users will be able to restrain themselves from depending on evaluation order unless either their compiler generates argument-interaction warnings like RABBIT does, or they are forced to deal with non-left-to-right implementations, or both. In case (b), there's the problem GJC brings up. The consistent thing would be to do the function position before anything else, but implementors will probably complain that that will slow everything down (which is true). Inconsistent things to do would be to compute the function last, or to leave the time at which the function is computed undefined (as in Maclisp - first in interpreted code, last in compiled). Either of these would be pretty bizarre, in my opinion, given that arguments are evaluated left-to-right. 
But I suspect one or the other such bizarre behavior is what many CL implementations out there exhibit. (Certainly if CL is left-to-right then E1 would have to be evaluated before E2 in (FUNCALL E1 E2); if CL has the first "bizarre" behavior described in the previous paragraph, then a compiler must not rewrite (FUNCALL #'G (I)) as (G (I)) unless it can prove that calling I will not assign #'G.) So we have a problem either way.  Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 21 Nov 84 11:19:13 PST Date: 21 November 1984 14:19-EST From: Kent M Pitman Subject: Stability? To: Wholey @ CMU-CS-C cc: Common-Lisp @ SU-AI In-reply-to: Msg of Tue 13 Nov 1984 00:22 EST from Skef Wholey Date: Tue, 13 Nov 1984 00:22 EST From: Skef Wholey It seems that if X is implemented as specified, but X is subject to debate, then the specification of X should not change. Some people are making appeals to taste, but there are other less tasteful features of the language that no one's trying to change. The issue of THROW using EQ or EQL is a discussion of what happens in the undefined case of using a number or character. EQ and EQL are the same over the defined range of args, and whether the words are explicit here or not, the clear implication is one of "is an error", and we have left "is an error" things open to discussion. Should CMU and DEC and Data General and Symbolics and whoever else out there be forced to reimplement something because it's not as tasteful (in someone's opinion -- not everyone's) as could be? No one has ever been forced in these discussions to do a particular thing. The point of putting thoughts on the floor is to find out what the consensus is. I actually believe it is critical that we re-examine certain issues in the aluminum edition, with the aim of changing them in later editions, even if such editions don't come soon. Not every decision made in the first edition was the right decision, and we cannot force users to live with all mistakes forever.
A major purpose of any further discussions on this list must be to figure out where we've made mistakes and what subset of those it would be possible/appropriate to correct. Sure, the compiler could easily figure out where an EQ-Throw could be used. I'll go and fix our implementation if everyone thinks I should. But that's not my idea of a stable language specification. Why not? No working program will break. Formerly undefined programs will become well-defined. There is nothing even remotely unstable about such a change. All changes should be so stable! Hey, why not have Throw take a keyword argument for the test to use? Or an arbitrary predicate that the tag must satisfy? Other keyword arguments would be useful as well... (unless (throw *zinger* 'hi! :test #'equalp :if-does-not-exist nil) (format t "Catch tag ~S was not found.~%" *zinger*)) I actually had what I think was a reasonable application for a :TEST arg recently. If you're being serious, I'll be glad to explain why it would be useful. This, by the way, is also a "stable", upward-compatible change. I also like the :IF-DOES-NOT-EXIST key, actually. In Maclisp, I have seen a few applications for (LET ((ERRSET NIL)) (ERRSET (*THROW ...) NIL)), which accomplishes the same end. It saves having to do (DEFVAR *FOO-TAG-AVAILABLE* NIL) (DEFUN F () (*CATCH 'FOO (LET ((*FOO-TAG-AVAILABLE* T)) ...))) (DEFUN G () (IF *FOO-TAG-AVAILABLE* (*THROW 'FOO *SOMETHING*))) which just wastes a lot of verbiage. Its only problem comes in a breakpoint where you might want to bind *FOO-TAG-AVAILABLE* to NIL, but you can't. There would also have to be an accompanying way for breakpoints to hide pending catch tags. -kmp  Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 21 Nov 84 11:11:05 PST Date: 21 November 1984 14:11-EST From: George J. Carrette Subject: left to right order of evaluation To: common-lisp @ SU-AI Or, "is funcall really considered harmful?"
Q: ; given: (defun f () (fset 'g #'h) (prog1 (g (i)) (fset 'g #'h))) (defun h (ignored) "left to right is strict") (defun i () (fset 'g #'(lambda (ignored) "left to right is violated"))) ; Is (F) => "left to right is strict" ; or => "left to right is violated" The question is, does Common Lisp define the result of (F) in this case? This is related to the question of the equivalence of (G ...) and (FUNCALL (PROGN #'G) ...), where I add the PROGN to indicate that I don't want compiler optimization to determine this. I've observed that on some machines this equivalence is rather designed in, in that a call of a function and a FUNCALL are exactly the same instruction sequence, but on some machines they are very different. -gjc  Received: from SU-SCORE.ARPA by SU-AI.ARPA with TCP; 20 Nov 84 18:57:50 PST Received: from IMSSS by Score with Pup; Tue 20 Nov 84 18:57:34-PST Date: 19 Nov 1984 2255-PST From: Rem@IMSSS Subject: How to organize portable programs with sys-dep parts? To: COMMON-LISP%SU-AI@SCORE When writing code that will run on many systems, with most of the code portable but a small part different on various systems, there comes the problem of how to organize the various files that contain the various parts of the code and how to link the portable code with the system-dependent code. I'm wrestling with this currently in PSL, and would be interested in hearing how Common LISP implementors did it, as well as ideas from others. My ideas are below ("existing code" refers to my IMSSS software that is presently being ported to CMS and will soon also be ported to other machines).
(1) The way my existing code does it, namely have all versions in the source and all versions compiled, but at runtime a global variable chooses each time which piece of code to execute [advantages: simple; disadvantages: confuses FCHECK; requires a global variable at runtime; takes a lot more memory at runtime; takes some more time to execute each time; compilation takes much longer since it has to compile everything for every machine every time] (2) The same, except using macros to select which body of code to compile [advantages: minimum runtime memory; minimum runtime CPU time; disadvantages: still requires versions of the source for all systems to be around when compiling for any system, thus taking longer to read it in and slightly longer to select the correct version to compile; requires the macro to know what the target machine will be, something I'm not sure the cross-compiler makes available to user macros] (3) Keep system-dependent stuff in separate files, using funny names such as SYSDEP-foo as the way the system-dependent stuff is called from the portable files, keep user-callable functions in portable source files with a link to the SYSDEP-foo function, and with comments indicating where the SYSDEP-foo function might be found [advantages: minimum runtime memory; minimum runtime CPU time; disadvantages: (a) if functions are used as linkage, slightly more CPU time (b) if macros are used as linkage, you won't be able to TRace system functions by their usual name at runtime unless the TRace package is smart enough to map (TRace OPEN) into (TRace SYSDEP-OPEN), for example.] (4) Same except have user call the system-dependent functions directly, i.e.
they have advertised names rather than funny names reached via a vector from the advertised names [advantages: minimum runtime memory; minimum runtime CPU time; disadvantages: confusing to maintainers, since they have no idea whether a function is in the portable files or in some random system-dependent file] -------  Date: 16 Nov 84 2216 PST From: Martin Frost Subject: message remailed after list corrected To: common-lisp@SU-AI.ARPA Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 16 Nov 84 16:52:16 PST Received: ID ; Fri 16 Nov 84 19:52:23-EST Date: Fri, 16 Nov 1984 19:52 EST Message-ID: From: Rob MacLachlan To: common-lisp@SU-AI.ARPA Subject: Inconsistency in CLM There is an inconsistency in the CLM in the definition of Ceiling, Floor, Truncate, and Round, in the explanation of the meaning of the second argument. It is not in general true that: (op x y) <==> (op (/ x y)) Consider Truncate with X = 1 and Y = -3/2: (/ 1 -3/2) => -2/3 (truncate -2/3) => 0, -2/3 This result violates the formula describing the relation between the return values and the arguments: (+ (* 0 -3/2) -2/3) => -2/3 ; Not 1! While the first value of the two expressions will be the same, the remainder is often different. Rob  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 13 Nov 84 00:36:11 PST Received: ID ; Tue 13 Nov 84 03:36:01-EST Date: Tue, 13 Nov 1984 03:35 EST Message-ID: Sender: WHOLEY@CMU-CS-C.ARPA From: Skef Wholey To: masinter.pa@XEROX.ARPA Cc: Common-Lisp@SU-AI.ARPA Subject: Stability? Insofar as a valid Common Lisp expression which is not "it is an error" could work differently and unpredictably in one implementation of the spec and another, the language isn't portable and "common" is a myth. The language specification is full of explicit concessions to different implementation techniques. "(let ((x 5)) (eq x x))" is a program that could yield different results in different implementations (page 78). If you want to question the funny nature of EQ, feel free.
I think you'll come up against a lot of opposition if you suggest that EQ be made to act like EQL. We've had that discussion before, I believe. But the nature of EQ wasn't being questioned. The choice of the predicate used by Throw for comparing catch tags was. I don't think there's any person living who thinks Common Lisp is the best programming language ever invented. There are many people who think it's a good compromise between many different language design issues. One should be able to write a correct Common Lisp program today and have it run tomorrow. No, the language shouldn't be stagnant, but it should change "only slowly and with due deliberation." My point was that the Throw/EQ issue is a little thing, and there are many other ugly little things in the language, but people have invested time and money implementing them. As I said, if it's agreed that EQL is the best way to compare catch tags, then I'll go along with that decision. But I wanted to inject a little of the language's goal of stability into the discussion. A programming language that changes in subtle ways overnight is far worse than one with subtleties that are carefully documented. --Skef  Received: from XEROX.ARPA by SU-AI.ARPA with TCP; 12 Nov 84 23:47:25 PST Received: from Chardonnay.ms by ArpaGateway.ms ; 12 NOV 84 23:47:50 PST From: masinter.pa@XEROX.ARPA Date: 12 Nov 84 23:47:00 PST Subject: Re: Stability? In-reply-to: WHOLEY@CMU-CS-C.ARPA's message of Tue, 13 Nov 84 00:22 EST, To: WHOLEY@CMU-CS-C.ARPA cc: Common-Lisp@SU-AI.ARPA If Common Lisp is a fundamentalist religion (the Aluminum Edition is the Word of God) and not subject to debate, on grounds of taste, efficiency, or portability, then please remove me from the DL -- any further discussion is senseless.
I believe the issue about EQ vs EQL in Catch and Throw tags is *not* an issue of taste, but rather an instance of one of the minor ways in which "Common Lisp" could fail to meet one of its initial goals: being a portable dialect. Insofar as a valid Common Lisp expression which is not "it is an error" could work differently and unpredictably in one implementation of the spec and another, the language isn't portable and "common" is a myth.  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 12 Nov 84 21:22:49 PST Received: ID ; Tue 13 Nov 84 00:22:36-EST Date: Tue, 13 Nov 1984 00:22 EST Message-ID: Sender: WHOLEY@CMU-CS-C.ARPA From: Skef Wholey To: Common-Lisp@SU-AI.ARPA Subject: Stability? It seems that if X is implemented as specified, but X is subject to debate, then the specification of X should not change. Some people are making appeals to taste, but there are other less tasteful features of the language that no one's trying to change. Should CMU and DEC and Data General and Symbolics and whoever else out there be forced to reimplement something because it's not as tasteful (in someone's opinion -- not everyone's) as could be? Sure, the compiler could easily figure out where an EQ-Throw could be used. I'll go and fix our implementation if everyone thinks I should. But that's not my idea of a stable language specification. Hey, why not have Throw take a keyword argument for the test to use? Or an arbitrary predicate that the tag must satisfy? Other keyword arguments would be useful as well... (unless (throw *zinger* 'hi! :test #'equalp :if-does-not-exist nil) (format t "Catch tag ~S was not found.~%" *zinger*)) --Skef  Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 12 Nov 84 20:40:45 PST Date: 12 November 1984 23:24-EST From: Glenn S. Burke Subject: catch/throw performance loss To: common-lisp @ SU-AI The fact that catch/throw don't use EQL has always bothered me as a consistency hole. I think they should.
I also don't believe that the tags should be restricted to symbols. To do this in NIL throw will have to check to see if it can do the fast EQ lookup -- this is a fairly trivial check. I also have some problems with doing hairy EQL lookup in THROW, but they have to do with maintaining the pointer to the catch-frame chain during the lookup, because it isn't an explicit data-type, and EQL can call out to the user.  Received: from XEROX.ARPA by SU-AI.ARPA with TCP; 12 Nov 84 13:40:21 PST Received: from Cabernet.MS by ArpaGateway.ms ; 12 NOV 84 13:41:31 PST Date: 12 Nov 84 13:40 PST From: JonL.pa@XEROX.ARPA Subject: EQ vs EQUAL: catch/throw performance loss In-reply-to: "Scott E. Fahlman" 's message of Mon, 12 Nov 84 15:50 EST To: Fahlman@CMU-CS-C.ARPA cc: KMP@MIT-MC.ARPA, common-lisp@SU-AI.ARPA I like your rationalization of this: ". . . Catch tags are supposed to be symbols. We generalized it a bit beyond that, because we could do this at *NO* extra implementation cost and it made some odd cases a bit cleaner . . .". This would permit a random list cell to be a catch tag, but the THROW primitives would not be required to call EQUAL. The call to use EQL would make more sense if it were a call to use EQUAL, because fewer exceptions would have to be remembered, but then the cost factor would come in to play again. An underlying problem, however, is the deeper one of EQ semantics, and I don't feel it is as bad as has been generally portrayed. Namely, contrary to the prevalent proverb, EQ does work on numbers, and a user can in fact so use it. What a user must understand is the more complicated semantics of EQ as opposed to EQUAL -- a burden placed upon him from the very beginnings of LISP 1.5. Here's a simple paraphrase of the problems in Common Lisp which he must understand: (1) two instances of a symbol FOO might not be EQ if they are interned on different packages, or if at least one of them is uninterned. [so what is the identity of a "symbol" then? 
one would like to define it with EQ, but one cannot ignore that English-language communication typically defines it by the characters of its pname, which is why one would have to say, e.g., "ess-eye-colon-FOO" if the package distinction were important]. (2) two instances of any heap-consed object will not be EQ unless stored at the same address; while the notion of "address" isn't really machine-dependent, the notion of when one has two EQUAL but not EQ objects is very program-dependent, and there are very few constraints upon documented Common Lisp functions to preserve EQness. (3) the compiler may, at its discretion, convert a pdl-allocated object into a heap-consed one of the same form (i.e., they must be EQUAL). So you have to know the potential range of pdl-allocation of your particular implementation. (4) numbers aren't the only objects with EQ problems due to pdl-allocation; although I don't remember if the Common Lisp manual comments on this question, it has been true for a long time that the MIT Lisp Machine descendants do pdl-allocation for the &rest argument to functions, and don't protect against this quantity being pointed to by heap structures. Thus even a program which is itself free of updatings could find itself with two items which are EQ but not EQUAL! [the "two items" here are not two program variables, but the same program variable at two different times]. I suspect that a simple bit of compiler technology is called for here. -- JonL --  Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 12 Nov 84 13:26:36 PST Date: 12 November 1984 16:08-EST From: Jonathan A Rees Subject: Using EQ instead of EQL to compare catch tags To: Wholey @ CMU-CS-C cc: COMMON-LISP @ SU-AI Date: Mon, 12 Nov 1984 13:51 EST From: Skef Wholey Non-symbol, non-numeric catch tags are useful. For example, to correctly implement the contorted example on page 40, one must create a unique catch tag at runtime (that's done quickly and portably with CONS). Not true.
General BLOCK and RETURN-FROM do not need to be implemented using CATCH/THROW. (It might actually make sense to do it just the other way around, as is done in a Lisp implementation I won't name.) Fine if an implementation does so, but such implementation detail needn't enter into discussions of language design (or manual revision). I suspect that any use of non-symbol CATCH could just as easily be implemented using BLOCK and lexical closures. Just to add my two cents worth, I tend to think of CATCH as a binding form, and therefore think it should work on the same kinds of things that PROGV works on, but it's probably too late to make this consistent (either by allowing variables named by non-symbols or by forbidding non-symbol CATCH tags). And given that non-symbol tags are allowed, and that there needn't be any performance penalty (as Kent rightly points out), the EQL definition is the right thing. Jonathan  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 12 Nov 84 12:51:05 PST Received: ID ; Mon 12 Nov 84 15:50:32-EST Date: Mon, 12 Nov 1984 15:50 EST Message-ID: Sender: FAHLMAN@CMU-CS-C.ARPA From: "Scott E. Fahlman" To: Kent M Pitman Cc: common-lisp@SU-AI.ARPA Subject: catch/throw performance loss All that random users need to remember is that Catch tags are supposed to be symbols. We generalized it a bit beyond that, because we could do this at *NO* extra implementation cost and it made some odd cases a bit cleaner to use some fresh-consed object other than a symbol. But rather than generalize to a case that IS going to cost more, I'd retreat to allowing symbols only. -- Scott  Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 12 Nov 84 12:24:43 PST Date: 12 November 1984 15:25-EST From: Kent M Pitman Subject: catch/throw performance loss To: common-lisp @ SU-AI Of course, there could be two routines -- one optimized for EQ when it works, and one for EQL when needed. Unless THROW sees a number or char, it can use the fast routine, no? 
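[Illustrative sketch, not from any actual implementation: the two-routine idea just mentioned works because EQ and EQL agree except on numbers and characters, so the tag type can select the comparison once, outside the frame-search loop. The frame stack is modeled here as a simple list of pending tags.]

```lisp
;; Sketch of dispatching THROW's tag comparison (not real implementation
;; code). EQ and EQL differ only for numbers and characters, so the
;; slower EQL test is needed only when the thrown tag is one of those.
(defun find-catch-tag (tag pending-tags)
  (if (or (numberp tag) (characterp tag))
      (member tag pending-tags :test #'eql)   ; general, slower path
      (member tag pending-tags :test #'eq)))  ; fast path: symbols, conses
```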
In which case, we're not talking performance loss -- just implementation overhead. In response to Skef's remark "Given the whole EQ not working on numbers business, I don't think there's more cognitive load on the programmer", I say I think he's confused. The point is that the user (at least, me) is going to expect that implementors are going to take their own advice and use EQL in places where EQ might be dangerous, and this is such a place. It is an undue burden on the programmer to have to remember "EQ doesn't work on numbers, but I have to remember that THROW will try to use it anyway, so I should be careful never to let THROW try such a thing, because a bug will bring my program down in flames". It's holes in the language like this that drive users crazy.  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 12 Nov 84 10:51:37 PST Received: ID ; Mon 12 Nov 84 13:51:20-EST Date: Mon, 12 Nov 1984 13:51 EST Message-ID: Sender: WHOLEY@CMU-CS-C.ARPA From: Skef Wholey To: Masinter.pa@XEROX.ARPA Cc: COMMON-LISP@SU-AI.ARPA Subject: Using EQ instead of EQL to compare catch tags In-reply-to: Msg of 12 Nov 1984 13:04-EST from Masinter.pa at XEROX.ARPA From: Masinter.pa at XEROX.ARPA To: COMMON-LISP at SU-AI.ARPA It seems clear that the "portable subset" of Common Lisp has to either insist that EQL be used for catch-tags, or else refuse to allow anything other than a symbol. That makes very little sense to me. Non-symbol, non-numeric catch tags are useful. For example, to correctly implement the contorted example on page 40, one must create a unique catch tag at runtime (that's done quickly and portably with CONS). The only place I can imagine numeric catch tags being used is in code generated by the compiler or some program-writing program. In either case, care can be taken to make sure that the tag that's caught and the tag that's thrown are EQ. There *are* some performance penalties for portability, you know....
Yeah, but Common Lisp tries to give a reasonable amount of flexibility to the implementor. Given the whole EQ not working on numbers business, I don't think there's much more cognitive load on the programmer. It's probably time for people to start realizing that there are real Common Lisp implementations supporting real users Right Now. Take a look at the Stability paragraph on page 3. --Skef  Received: from XEROX.ARPA by SU-AI.ARPA with TCP; 12 Nov 84 10:12:12 PST Received: from Semillon.ms by ArpaGateway.ms ; 12 NOV 84 10:13:23 PST Date: 12 Nov 84 10:04 PST From: Masinter.pa@XEROX.ARPA Subject: Re: Using EQ instead of EQL to compare catch tags In-reply-to: Skef Wholey 's message of Sat, 10 Nov 84 20:37 EST To: COMMON-LISP@SU-AI.ARPA It seems clear that the "portable subset" of Common Lisp has to either insist that EQL be used for catch-tags, or else refuse to allow anything other than a symbol. There *are* some performance penalties for portability, you know....  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 10 Nov 84 17:37:56 PST Received: ID ; Sat 10 Nov 84 20:37:58-EST Date: Sat, 10 Nov 1984 20:37 EST Message-ID: Sender: WHOLEY@CMU-CS-C.ARPA From: Skef Wholey To: Kent M Pitman Cc: COMMON-LISP@SU-AI.ARPA Subject: Using EQ instead of EQL to compare catch tags From: Kent M Pitman Is there a reason why EQ was chosen instead of EQL for comparing CATCH tags? It seems to me that signalling errors isn't likely to be so blindingly fast that using EQL instead of EQ would slow it down enough to matter, and it would simplify the definition (top of p140) to use EQL, wouldn't it? This would be an upward compatible change. Spice Lisp (and perhaps other implementations) would have trouble with that change of definition. In our case, the microcode knows only about fixnums and short floats (immediate numbers for us), and escapes to macrocode routines for arithmetic (and comparison) operations involving other sorts of numbers. 
Throw is implemented in microcode on the Perq, so if the Throw instruction were given a bignum tag and ran into a bignum tag on the stack, it would have to escape to macrocode to do the comparison. But our macrocode escape mechanism is designed to work only on instruction boundaries, so there wouldn't be any clean way to get back into microcode to continue the throw. I could certainly add some kludge to the Perq microcode to deal with the EQL test, but since there are other implementations out there with tense EQ'ing loops in microcode, assembly code, Bliss, or whatever, I don't think the change is worth the trouble. Also, there would be some loss of performance on stock hardware by putting the EQL into the inner loop of throw. --Skef  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 10 Nov 84 17:30:33 PST Received: ID ; Sat 10 Nov 84 20:30:52-EST Date: Sat, 10 Nov 1984 20:30 EST Message-ID: Sender: FAHLMAN@CMU-CS-C.ARPA From: "Scott E. Fahlman" To: Kent M Pitman Cc: COMMON-LISP@SU-AI.ARPA Subject: Using EQ instead of EQL to compare catch tags In-reply-to: Msg of 10 Nov 1984 18:29-EST from Kent M Pitman Catch/Throw is not just for signalling errors. It is used quite a bit in our interpreter, and therefore needs to be moderately fast. EQL is considerably slower than EQ on stock hardware, and this speed advantage, though not terribly important, seemed to be more important than letting people use numbers as catch-tags. -- Scott  Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 10 Nov 84 16:56:48 PST Date: Sat 10 Nov 84 18:29-EST From: Kent M Pitman Subject: Using EQ instead of EQL to compare catch tags To: COMMON-LISP@SU-AI Is there a reason why EQ was chosen instead of EQL for comparing CATCH tags? It seems to me that signalling errors isn't likely to be so blindingly fast that using EQL instead of EQ would slow it down enough to matter, and it would simplify the definition (top of p140) to use EQL, wouldn't it? This would be an upward-compatible change.
-kmp  Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 4 Nov 84 04:28:29 PST Date: 4 November 1984 07:30-EST From: Glenn S. Burke Subject: cruftier contagion corollaries cause consternation To: common-lisp @ SU-AI, mlb%SPA-NIMBUS @ SCRC-STONY-BROOK cc: Guy.Steele @ CMU-CS-A, rwg%SPA-NIMBUS @ SCRC-STONY-BROOK Date: Sat, 3 Nov 84 07:44 PST From: Bill Gosper . . . A much cruftier corollary of floating contagion shows up in min and max. It is a contradiction to say "max returns the argument that is greatest ..." and then in the next sentence say "the implementation is free to produce ... [some rational's] floating point approximation". Such an approximation is generally not any of the arguments, or worse, it may even be = to an argument that was not the largest! Yum, it can be even worse with things like multiple-argument =. There can be two rationals R1 and R2 which are different but both = to some float F. So, (= r1 r2 f) would be true if it was done as (= (float r1) (float r2) f), i.e., by doing contagious coercion of all args before doing anything else, but not if it was done as (and (= r1 r2) (= r2 f)). What would your users think if (= r1 r2 f) was T when run in NIL compiled and in Spice interpreted, and NIL when run in NIL interpreted and in Spice compiled?
Floating-point contagion may have been the disease, but remember also that Common Lisp adopted the APL extension to SIGNUM, namely that SIGNUM of a complex number returns that point on the unit circle on the ray from the origin to the argument (or zero if the argument is zero). That lent more impetus to following the usual contagion rules (both floating and complex). It also allows the following fairly elegant definition: (defun signum (x) (if (zerop x) x (/ x (abs x)))) --Guy Well, I hardly think invoking divide is elegant, and the complex numbers are broken without complex infinity, which is phaseless, and must therefore constitute a fourth value of signum, but that's another story. A much cruftier corollary of floating contagion shows up in min and max. It is a contradiction to say "max returns the argument that is greatest ..." and then in the next sentence say "the implementation is free to produce ... [some rational's] floating point approximation". Such an approximation is generally not any of the arguments, or worse, it may even be = to an argument that was not the largest!  Received: from SCRC-STONY-BROOK.ARPA by SU-AI.ARPA with TCP; 26 Oct 84 18:36:09 PDT Received: from SPA-RUSSIAN by SCRC-STONY-BROOK via CHAOS with CHAOS-MAIL id 115582; Fri 26-Oct-84 02:46:33-EDT Date: Thu, 25 Oct 84 23:47 PDT From: Bill Gosper Subject: SIGNUM in integer form To: Guy.Steele@CMU-CS-A.ARPA Cc: common-lisp@SU-AI.ARPA In-reply-to: The message of 21 Oct 84 21:47-PDT from Guy.Steele@CMU-CS-A.ARPANET Date: 22 Oct 84 0047 EDT (Monday) From: Guy.Steele@CMU-CS-A.ARPA Maybe what you want is the third value from INTEGER-DECODE-FLOAT? (Not the most elegant thing, I admit.) --Guy No, for 1.5 reasons: 1) It doesn't return 0 for 0. 1.5) It isn't documented what happens for integer arg. Semi-apropos those pages, I really wish there were SCALE-BY-POWER-OF-2, which various implementations could do much faster than multiplication, for almost any representation of any kind of number. 
SCALE-FLOAT can't do this because of hexadecimal affirmative action, and having the wrong name.  Received: from CSNET-RELAY.ARPA by SU-AI.ARPA with TCP; 26 Oct 84 17:29:12 PDT Received: from hplabs by csnet-relay.csnet id am00718; 26 Oct 84 20:11 EDT Received: by HP-VENUS id AA26468; Fri, 26 Oct 84 12:57:48 pdt Message-Id: <8410261957.AA26468@HP-VENUS> Date: 26 Oct 1984 1257-PDT From: AS%hplabs.csnet@csnet-relay.arpa Subject: EVAL-WHEN: question about intent To: Common-Lisp%su-ai.arpa@csnet-relay.arpa Source-Info: From (or Sender) name not authenticated. Despite Guy Steele's improvements, I believe the definition of EVAL-WHEN in the published definition is still fuzzy and the intent is not clear. The purpose of this message is to outline the nature of this fuzziness and request opinions from designers and users regarding use of EVAL-WHEN. The issue raised by this message is distinct from the issue I raised in a recent message (to which I have not seen any replies). -------------------------------------------------------------------------------- There are two different, but related characteristics that one might want to control using EVAL-WHEN. Characteristic 1: Direct Evaluation vs. Making a Compiled File A. Making a "compiled" file involves processing a collection of forms and producing a file that later can be loaded to achieve the effect of evaluating those forms. In this case, there are two clearly distinct "times" (when the file is being made and when it is being loaded) and potentially two distinct environments. Note that making a "compiled file" need not involve the generation of "machine code", although it is presumed to involve "reading the file" (and invoking readmacros) and "expanding macros", two activities where the Common Lisp manual explicitly mentions that EVAL-WHEN may be required. B. The alternative is "direct evaluation" (for side-effect and result). 
As stated on page 321, direct evaluation can be performed by an interpreter, or by compiling the form into some other representation (such as "machine code") and then interpreting this representation (perhaps by "hardware"). In this case, there is only one "time" as far as the user is concerned, and only one environment; one may have the choice of either "doing it" or "not doing it", but there is no choice as to "where it is done", and in no case should "it be done twice". Characteristic 2: Compilation vs. Interpretation A. In this sense, "compilation" presumably refers to generating "machine code", although this notion may be meaningless or ambiguous in any particular implementation. One could argue that even an interpreter that uses a preprocessor should be considered to be a "compiler", since it transforms the program before executing it. B. The alternative, "interpretation", presumably refers to a technique for evaluation that is not a "compiler", as described above. Some implementations may not offer this alternative. In my view, Characteristic 1 is well defined and useful, but Characteristic 2 is fuzzy to the point of being useless, especially in light of the variety of possible implementation techniques for evaluation (explicitly permitted on page 321). Unfortunately, the definition of EVAL-WHEN refers to both characteristics. The "COMPILE" and "LOAD" situations for EVAL-WHEN are clearly defined in terms of compiling a file (Characteristic 1). However, the definition of the "EVAL" situation refers to interpreting versus compilation (which sounds like Characteristic 2). I do not know what the motivation is for attempting to use EVAL-WHEN to control what happens during direct evaluation. PSL does not provide such a feature: it assumes that regardless of what is desired when making or loading a compiled file, one always wants to "do it" (once) when directly evaluating. I would prefer to omit this "EVAL" situation entirely from Common Lisp. 
However, it is there, and presumably there are reasons for it. What are they? Where would you use this control? Which characteristic (as described above) are you attempting to select for when you use EVAL-WHEN? We must decide which one it is (it can't be both). -------------------------------------------------------------------------------- Alan Snyder hplabs!snyder -------  Received: from DEC-MARLBORO.ARPA by SU-AI.ARPA with TCP; 26 Oct 84 10:46:14 PDT Date: Fri 14 Sep 84 14:56:44-EDT From: GREEK@DEC-MARLBORO.ARPA Subject: Daylight Saving Time To: common-lisp@SU-AI.ARPA I hate to beat a horse which everyone wishes were dead, but: What should DECODE-UNIVERSAL-TIME do about daylight saving time? Clearly, ENCODE-UNIVERSAL-TIME takes it into account, if appropriate, when normalizing to GMT. When decoding, however, who knows if the original time was on daylight saving time? And if decoding into a different time zone, who knows if that zone is on daylight saving time? At the very least, DECODE-UNIVERSAL-TIME has no business returning daylight-saving-time-p. In VAX LISP, we take daylight saving time into account when decoding into the local time zone, and in no other case. This means that if a local time is encoded and then decoded in California, it's off by one hour. I think our algorithm is bogus, but I'm not sure what to do. - Paul -------  Received: from CSNET-RELAY.ARPA by SU-AI.ARPA with TCP; 22 Oct 84 14:40:11 PDT Received: from hplabs by csnet-relay.csnet id al00222; 22 Oct 84 17:29 EDT Received: by HP-VENUS id AA24895; Mon, 22 Oct 84 11:15:18 pdt Message-Id: <8410221815.AA24895@HP-VENUS> Date: 22 Oct 1984 1115-PDT From: AS%hplabs.csnet@csnet-relay.arpa Subject: Should (EVAL-WHEN (EVAL LOAD) mumble) = mumble? To: Common-Lisp%su-ai.arpa@csnet-relay.arpa Source-Info: From (or Sender) name not authenticated. As far as I can tell from reading the definition of EVAL-WHEN, the form (EVAL-WHEN (EVAL LOAD) mumble) is always equivalent to an unadorned mumble. 
Specifically, if the interpreter sees this EVAL-WHEN, it will evaluate mumble (since EVAL is specified); if the compiler sees this EVAL-WHEN, it will process mumble in the same mode as the EVAL-WHEN form (either NOT-COMPILE-TIME or COMPILE-TIME-TOO). In either case, the interpreter and compiler process mumble just as they would if mumble appeared by itself, without being wrapped in EVAL-WHEN. Does anyone disagree with this interpretation of the definition? If this interpretation of the definition is correct, then I would argue that the definition is not the right one. PSL has a similar notion called LOADTIME, but its effect is different. Specifically, (LOADTIME mumble) instructs the compiler as follows: (1) If mumble would normally be evaluated at compiletime, then do not perform that evaluation; (2) generate code to evaluate mumble at load-time, even if such code would not normally be generated. In Common Lisp terms, assuming this macro definition, (DEFMACRO FOO () `(EVAL-WHEN (EVAL COMPILE) (*FOO))) the form (EVAL-WHEN (EVAL LOAD) (FOO)) would NOT invoke *FOO at compile time, and WOULD generate code to invoke *FOO at load time. (In the PSL example I actually encountered, FOO was not a macro, but a function flagged as implicitly EVAL-COMPILE.) Alan Snyder hplabs!snyder -------  Received: from CMU-CS-A.ARPA by SU-AI.ARPA with TCP; 21 Oct 84 21:51:52 PDT Date: 22 Oct 84 0042 EDT (Monday) From: Guy.Steele@CMU-CS-A.ARPA To: rwg%SPA-NIMBUS@SCRC-STONY-BROOK.ARPA Subject: SIGNUM and brain damage CC: common-lisp@SU-AI.ARPA In-Reply-To: "rwg%SPA-NIMBUS@SCRC-STONY-BROOK.ARPA's message of 18 Oct 84 00:49-EST" I'm probably guilty here, if guilt there be. Floating-point contagion may have been the disease, but remember also that Common Lisp adopted the APL extension to SIGNUM, namely that SIGNUM of a complex number returns that point on the unit circle on the ray from the origin to the argument (or zero if the argument is zero). 
That lent more impetus to following the usual contagion rules (both floating and complex). It also allows the following fairly elegant definition: (defun signum (x) (if (zerop x) x (/ x (abs x)))) --Guy  Received: from CMU-CS-A.ARPA by SU-AI.ARPA with TCP; 21 Oct 84 21:52:02 PDT Date: 22 Oct 84 0047 EDT (Monday) From: Guy.Steele@CMU-CS-A.ARPA To: Bill Gosper Subject: SIGNUM in integer form CC: common-lisp@SU-AI.ARPA In-Reply-To: "Bill Gosper's message of 18 Oct 84 03:02-EST" Maybe what you want is the third value from INTEGER-DECODE-FLOAT? (Not the most elegant thing, I admit.) --Guy  Received: from SCRC-STONY-BROOK.ARPA by SU-AI.ARPA with TCP; 18 Oct 84 01:04:21 PDT Received: from SPA-RUSSIAN by SCRC-STONY-BROOK via CHAOS with CHAOS-MAIL id 110817; Thu 18-Oct-84 04:05:56-EDT Date: Thu, 18 Oct 84 01:02 PDT From: Bill Gosper Subject: Why dosen't SIGNUM preserve a few exponent and fraction bits while it's at it? To: DCP@SCRC-STONY-BROOK.ARPA Cc: BUG-LISPM@SCRC-STONY-BROOK.ARPA, lisp-designers%SPA-NIMBUS@SCRC-STONY-BROOK.ARPA, common-lisp@SU-AI.ARPA, ddyer%SPA-NIMBUS@SCRC-STONY-BROOK.ARPA, cwr%SPA-NIMBUS@SCRC-STONY-BROOK.ARPA In-reply-to: <841017111351.8.DCP@NEPONSET.SCRC.Symbolics> Date: Wednesday, 17 October 1984, 11:13-EDT From: David C. Plummer Date: Wed, 17 Oct 84 02:47 PDT From: Bill Gosper In Symbolics 3600 System 242.355, Mailer 41.3, Print 34.3, Macsyma 23.103, microcode TMC5-MIC 297, on Russian: For what my opinion is worth, it was massively wrong for SIGNUM to try to preserve the floatingness of its argument. The whole purpose of the function is to reduce the information content of its arg to the minimum. Can anyone recall the rationale? Was it any better than the "contagious floating" creeping braindamage? When you get an exact answer, it is idiotic to cast it into the one grubby datatype that fosters inexact computation. Maybe what you're trying to say is "this SIGNUM was of a datum so wretched that even its sign was probably wrong"? 
What does this have to do with SIGNUM? The bug you are SHOWING is that %LEXPR-[AREF/ASET/ALOC] don't type check their indices. >>Error: Page fault on unallocated VMA 206060064 While in the function SYS:%LEXPR-AREF  AREF  TRIANGLE-WINDING-OF-ORIGIN SYS:%LEXPR-AREF: (P.C. = 271) Arg 0 (ARRAY): # Arg 1 (INDICES): (2.0 2.0 2.0) Local 2 (NDIMS): 3 Local 3 (DATA-POINTER): # Local 4 (LINEAR-INDEX): 26.0 Local 5 (TYPE): 5 Local 6 (BITS-PER-ELEM): NIL Local 7 (ELEMS-PER-Q): 1 Sorry to have been so garbled. I guess I expected everyone to figure out that I subscripted by the 1+ of SIGNUMs. The error was especially traumatic since we had just swapped the DP board in an apparently successful attempt to fix a longstanding problem with EGC and FULL-GC, and I had just run them! Halloween came early this year. PS: If it didn't cost anything to check and FIX the subscripts, floating integer subscripts should work! Actually, there shouldn't be floating integers! CommonSlip's "rule of rational canonicalization" should convert them (if possible) to fixnums! -0.0 be damned! My keyboard is melting!  Received: from SCRC-STONY-BROOK.ARPA by SU-AI.ARPA with TCP; 17 Oct 84 22:51:19 PDT Received: from SPA-RUSSIAN by SCRC-STONY-BROOK via CHAOS with CHAOS-MAIL id 110774; Thu 18-Oct-84 01:52:59-EDT Date: Wed, 17 Oct 84 22:49 PDT From: rwg%SPA-NIMBUS@SCRC-STONY-BROOK.ARPA Sender: rwg%SPA-NIMBUS@SCRC-STONY-BROOK.ARPA Subject: Why dosen't SIGNUM preserve a few exponent and fraction bits while it's at it? To: common-lisp@SU-AI.ARPA Supersedes: The (misaddressed) message of 17 Oct 84 02:47-PDT from Bill Gosper (cc:bug-lispm@scrc,lisp-designers@scrc) In Symbolics 3600 System 242.355, Mailer 41.3, Print 34.3, Macsyma 23.103, microcode TMC5-MIC 297, on Russian: For what my opinion is worth, it was massively wrong for SIGNUM to try to preserve the floatingness of its argument. The whole purpose of the function is to reduce the information content of its arg to the minimum. Can anyone recall the rationale? 
Was it any better than the "contagious floating" creeping braindamage? When you get an exact answer, it is idiotic to cast it into the one grubby datatype that fosters inexact computation. Maybe what you're trying to say is "this SIGNUM was of a datum so wretched that even its sign was probably wrong"? >>Error: Page fault on unallocated VMA 206060064 While in the function SYS:%LEXPR-AREF  AREF  TRIANGLE-WINDING-OF-ORIGIN SYS:%LEXPR-AREF: (P.C. = 271) Arg 0 (ARRAY): # Arg 1 (INDICES): (2.0 2.0 2.0) Local 2 (NDIMS): 3 Local 3 (DATA-POINTER): # Local 4 (LINEAR-INDEX): 26.0 Local 5 (TYPE): 5 Local 6 (BITS-PER-ELEM): NIL Local 7 (ELEMS-PER-Q): 1 AREF: (P.C. = 10) Arg 0 (ARRAY): # Rest arg (SUBSCRIPTS): (2.0 2.0 2.0) TRIANGLE-WINDING-OF-ORIGIN: (P.C. = 42) Arg 0 (X0): -1 Arg 1 (Y0): -0.9999 Arg 2 (X1): 0 Arg 3 (Y1): -0.9999 Arg 4 (X2): 1 Arg 5 (Y2): 1.0001 TRIANGLE-WINDING-OF-ORIGIN: (encapsulated for TRACE) Rest arg (ARGLIST): (-1 -0.9999 0 -0.9999 1 1.0001) POLYGON-WINDING-NUMBER: (P.C. = 42) Arg 0 (PX): 1 Arg 1 (PY): 0.9999 Arg 2 (FIRST-X): 0 Arg 3 (FIRST-Y): 0 Rest arg (COORDS): (1 0 2 2 0 1) SI:*EVAL: (P.C. = 370) Arg 0 (FORM): (POLYGON-WINDING-NUMBER 1 0.9999 0 0 1 0 2 2 0 ...) SI:LISP-COMMAND-LOOP-INTERNAL: (P.C. = 200) Rest arg: (:NAME "Lisp Top Level in Lisp Listener 1" :ABORTED-FUNCTION NIL :BEFORE-PROMPT-FUNCTION NIL :READ-FUNCTION NIL :EVAL-FUNCTION NIL ...) SI:LISP-COMMAND-LOOP: (P.C. = 115) Arg 0 (STREAM): # Rest arg: (:NAME "Lisp Top Level in Lisp Listener 1") Rest of stack: SI:LISP-TOP-LEVEL1: (P.C. = 22) SI:LISP-TOP-LEVEL: (P.C. = 7)  Received: from SCRC-QUABBIN.ARPA by SU-AI.ARPA with TCP; 17 Oct 84 12:06:40 PDT Received: from SCRC-EUPHRATES by SCRC-QUABBIN via CHAOS with CHAOS-MAIL id 92185; Wed 17-Oct-84 15:03:59-EDT Date: Wed, 17 Oct 84 15:04 EDT From: "David A. 
Moon" Subject: Re: Questions about OPEN To: Charles Hedrick Cc: RAM@CMU-CS-C.ARPA, common-lisp@SU-AI.ARPA, heller%umass-cs.csnet@CSNET-RELAY.ARPA In-reply-to: The message of 5 Oct 84 13:09-EDT from Charles Hedrick Date: 5 Oct 84 13:09:40 EDT From: Charles Hedrick On the DEC-20 it would certainly be better to maintain the distinction between filename and stream. A common strategy for an editor is to write a new copy of a file with a temporary file name, close it, and then rename it on top of the original. This minimizes the probability of damage in a crash. One would prefer to use the same JFN for this entire process. A JFN is a small integer that is a handle on a file. If you keep the JFN active, you are sure that nobody else is going to rename the file out from under you or do anything else nefarious. This is easy if all of the operations can be done on streams. My feeling is that the distinction makes sense on some OS's, and on others causes no harm (other than extra code), so it should be kept. ------- I think the point here is that you don't write such an editor in Common Lisp by generating a temporary file name yourself (how do you make the file name syntax for such temporary files portable across implementations with different maximum file name lengths and different legal characters to appear in file names?), calling OPEN, calling CLOSE, and calling RENAME-FILE. Instead, you call OPEN in :IF-EXISTS :SUPERSEDE mode, and the implementation takes care of doing whatever combination of OPEN's, CLOSE's, and RENAME's is appropriate for the particular file system. Your editor might want to check (eq (pathname-version pathname) ':newest), which is an implementation-independent operation, before deciding whether to use supersede mode. In Unix, where there are no version numbers, that EQ test will always be false, you will always use supersede mode, and everything will be fine.  
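Moon's advice reduces to a few lines of code (a sketch; the filename "foo.bar" and the contents written are illustrative only):

```lisp
;; Sketch of Moon's strategy: instead of hand-rolling a temporary file
;; and calling RENAME-FILE yourself, ask OPEN to supersede and let the
;; implementation pick whatever combination of OPEN's, CLOSE's, and
;; RENAME's is right for its file system.
(with-open-file (out "foo.bar"
                     :direction :output
                     :if-exists :supersede
                     :if-does-not-exist :create)
  (write-line "new contents" out))
```

In an implementation that supersedes by renaming, the old contents survive until the stream is closed successfully, which is exactly the crash-safety property the editor strategy was after.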
Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 14 Oct 84 08:22:22 PDT Received: ID ; Sun 14 Oct 84 11:23:12-EDT Date: Sun, 14 Oct 1984 11:23 EDT Message-ID: Sender: FAHLMAN@CMU-CS-C.ARPA From: "Scott E. Fahlman" To: Skef Wholey Cc: Common-Lisp@SU-AI.ARPA Subject: Inconsistency in Aluminum edition In-reply-to: Msg of 14 Oct 1984 11:00-EDT from Skef Wholey Certainly PUSHNEW and ADJOIN behave the same way with respect to the :KEY argument, namely the way ADJOIN does this. The manual should be tweaked to make this absolutely clear. -- Scott  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 14 Oct 84 07:59:58 PDT Received: ID ; Sun 14 Oct 84 11:00:47-EDT Date: Sun, 14 Oct 1984 11:00 EDT Message-ID: Sender: WHOLEY@CMU-CS-C.ARPA From: Skef Wholey To: Common-Lisp@SU-AI.ARPA Subject: Inconsistency in Aluminum edition The text describing PUSHNEW (p. 270) and that describing ADJOIN (p. 276) are apparently in contradiction. "The keyword arguments to PUSHNEW follow the conventions for generic sequence functions. See chapter 14. In effect, these keywords are simply passed on to the ADJOIN function." "ADJOIN deviates from the usual rules described in chapter 14 for the treatment of the arguments named ITEM and KEY. If a KEY function is specified, it is applied to ITEM as well as to each element of the list." Which should one believe? --Skef  Received: from SU-SCORE.ARPA by SU-AI.ARPA with TCP; 11 Oct 84 21:42:31 PDT Received: from IMSSS by Score with Pup; Thu 11 Oct 84 21:41:25-PDT Date: 11 Oct 1984 2142-PDT From: Rem@IMSSS Subject: Whether interpreted&compiled code should agree initially? To: COMMON-LISP%SU-AI@SCORE Date: 11 Oct 84 20:01 PDT From: JonL.pa@XEROX.ARPA I certainly don't want to take on the task of deciding how far to standardize the solidarity between interpreted code and compiled code. This reminds me. 
The old approach was that some "programs" would run differently interpreted from compiled, mostly due to interpreted variables defaulting to special while compiled variables defaulting to local, but such programs with undeclared free variables weren't regarded as "correct" LISP programs even though they ran ok interpreted. Software existed (for example FCHECK for USLISP and PSL) to look for undeclared free variables and other problems, so the programmer could repair the code to be "correct", so that it would run the same interpreted and compiled (more likely so it would compile at all). A program wasn't considered finished until FCHECK or whatever failed to find any source errors. Common LISP has taken a different tack. At great pain interpreted code is made to act like compiled code, so that trouble is found during interpreted test runs rather than later during FCHECKing or attempts at compiling. Does anybody have a strong reason why this method is better? (I always thought the purpose of a LISP interpreter was to be able to hack together something which while not correct did run enough that I could debug the various parts of it independently rather than have everything break if one declaration early in the file was missing. Then after getting it to basically work you start tuning it to run more efficiently, i.e. compiled at all, and maybe with some variables used in tight loops declared as FIXNUMs or BOOLEAN or (ARRAY ) etc. Doesn't Common LISP sort of go against the hacking philosophy?) Why not allow interpreted and compiled code to have different semantics for initial versions of programs that aren't yet correct, but provide software for checking the correctness of programs so they can be made to run the same both ways?? 
-------  Received: from SU-SCORE.ARPA by SU-AI.ARPA with TCP; 11 Oct 84 21:22:22 PDT Received: from IMSSS by Score with Pup; Thu 11 Oct 84 21:21:23-PDT Date: 11 Oct 1984 2123-PDT From: Rem@IMSSS Subject: Fast or slow linkage re TRACE, what to put in specs? To: COMMON-LISP%SU-AI@SCORE (From a longstanding LISP user: SU-1.6, UCI-LISP, MacLISP, USLISP, PSL; but newcomer to Common LISP. Regarding what can be traced and what can be open-coded or otherwise modified so it can't be traced.) Date: Thu, 11 Oct 1984 22:47 EDT From: Skef Wholey Is it alright for the writer of a "module" to [use fast linkage when a function in the module calls another in the same module]? Why should the random user who wants to trace Print have any reason to believe that the writer of the code supplied to him used that function? Sounds like information-hiding to me. Last I checked, that was supposed to be a good thing. I agree. When a particular function call within a module is not a documented one (thus the user doesn't have any reason to expect a particular kind of linkage nor even the function call at all), and when using fast (untraceable) linkage makes the code run faster or take up less space, and when full (traceable) linkage is no longer needed for debugging because this module is fully tested and being distributed to customers, it's completely reasonable (fair, permissible) to compile that function-call using fast linkage. If somebody later wants to debug that module (a previously undiscovered bug crops up, or modifications are desired), the module can be recompiled using slow linkage during the debugging period, and then restored to fast linkage when it's back in production form. Partition the set of Common Lisp functions into three classes: 1. Those that may be open-coded. 2. Those that may either be open-coded or left alone. 3. Those that should never be open-coded. I agree providing you're referring only to calls from user code. 
Calls within system code may be fast-coded as per my previous paragraph. The user should have reason to expect that calls from that user's code to a certain class of functions (documented in class 3 above) will always be slow-coded (traceable) when called from the user's own code. The user should also be provided a facility to selectively modify the behaviour of the compiler when desired so that calls from the user's newly-compiled code to certain other specifically named functions will also be slow-coded. Information-hiding is a good thing, but a better thing about Lisp is that it has usually provided a way around it, so that people can get work done. I've used the Lisp Machine Inspector to "unlock" a package to hack some symbols in that package. I didn't need to read any documentation or source code. This is a good alternative to tracing when you don't have the source for somebody else's module, or you don't have time or desire to recompile it, or it works fine with slow linkage but there's a bug in open coding, etc. It may be too soon in the state of the art to specify that every Common-LISP implementation should have an inspector, but such a facility should perhaps be described and encouraged in the manual. There's no real reason to treat compiled code as a black box and TRACE as the only tool for finding things out about the contents of that box. Contrary to what Hedrick says, this IS Lisp. But TRACE is a good first measure to handle 90% of the problems. Beyond simple TRACE, a PAUSE-TRACE should be available which prints the arguments like TRACE, but then asks if the user wants (after seeing arguments) to go into a BREAK (read-eval-print) loop before executing the called function; then after printing the result asks again if the user wants to go into a BREAK before returning. I think Common LISP ought to have TRACE with break-before and break-after more than it should have half the random cruft currently in the specs. Date: Thu, 11 Oct 1984 23:04 EDT From: "Scott E. 
Fahlman" Some implementations are going to use all sorts of tricks to get full efficiency, and some of these may interfere with tracing. I think that the right attitude is that interpreted code definitely should be traceable and anything else you get is gravy. I agree. Calls from or to interpreted code always should be traceable (but I don't think anybody has an implementation where this is violated except where an FEXPR or MACRO is being called and the trace package can't handle it, so this isn't the disputed point). We need to think about how much gravy, since it truly is useful to trace compiled calls from code that is compiled either because it's too slow to debug interpreted or we thought it worked and now we see it really doesn't. It's a royal pain to have to retreat to interpreted code just because we want to trace a call. Therefore we really need some "gravy". The question is how much and how to document it. So it seems to me that each implementation should do the best it can on this, and should document which things you can trace and which you can't. Do we need to say anything more about this? Maybe. Do we make it total caveat emptor, with each implementation required only to document what it has done, or do we put some constraints on implementation to force a minimal amount of "gravy" across all systems. Since debugging is the essence of LISP, and all customers of barebones Common LISP will expect the ability to debug software they write, I don't think total caveat emptor is the way to go. 
-------  Received: from XEROX.ARPA by SU-AI.ARPA with TCP; 11 Oct 84 20:26:56 PDT Received: from Chardonnay.ms by ArpaGateway.ms ; 11 OCT 84 20:26:55 PDT Date: 11 Oct 84 20:19 PDT From: JonL.pa@XEROX.ARPA Subject: Re: A few more items on "Questions about specification and possible implementations" In-reply-to: Skef Wholey 's message of Thu, 11 Oct 84 22:47 EDT To: Wholey@CMU-CS-C.ARPA cc: KMP@MIT-MC.ARPA, Common-Lisp@SU-AI.ARPA, PERDUE%hplabs.csnet@CSNET-RELAY.ARPA Oh, I forgot about this one ". . . to avoid consing a rest argument" At the time that &rest arguments got standardized into Common Lisp, the VAX/NIL was implementing them as vectors instead of lists. *** Thus there was no consing for &rest args *** and it's precisely the adoption of a list versus a vector format that shafts a stock hardware implementation (assuming stock hardware precludes cdr coding etc) in favor of the MIT Lisp Machine and its descendants. And "we could use the Vax SCANC instruction" While having a string-scanning opcode is "nice" for a machine, that doesn't necessarily mean that a function call to a function utilizing that opcode is a burdensome overhead. The "win" of a single opcode for things like that is the low loop overhead (unless your implementation simply cannot do any function call without incurring gross overhead). A "CAR" opcode, however, exhibits no "loop overhead". The goal of PDP10 MacLisp was "speed at all costs"; to some degree, the goal of Interlisp has been "programming aids at all costs". I'll be the first to admit that the perceived value of "speed at all costs" should be diminished by 6 orders of magnitude compared to what it was 20 years ago when the PDP10 architecture was being developed. While TRACE may not be the standard for "programming aids at all costs", still it is useful not to have the compiler so effectively "hide" one's programs that others cannot gain a good model of them from the source code. 
-- JonL --  Received: from XEROX.ARPA by SU-AI.ARPA with TCP; 11 Oct 84 20:10:16 PDT Received: from Chardonnay.ms by ArpaGateway.ms ; 11 OCT 84 20:10:36 PDT Date: 11 Oct 84 20:01 PDT From: JonL.pa@XEROX.ARPA Subject: Re: Questions about specification and possible implementations In-reply-to: Skef Wholey 's message of Thu, 11 Oct 84 22:47 EDT To: Wholey@CMU-CS-C.ARPA cc: KMP@MIT-MC.ARPA, Common-Lisp@SU-AI.ARPA, PERDUE%hplabs.csnet@CSNET-RELAY.ARPA I hope your questions are rhetorical, and not directed to me -- I certainly don't want to take on the task of deciding how far to standardize the solidarity between interpreted code and compiled code. Pitman has had some interesting comments about variations at that level in Pdp10 MacLisp -- care to comment KMP? -- JonL --  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 11 Oct 84 20:07:56 PDT Received: ID ; Thu 11 Oct 84 23:04:56-EDT Date: Thu, 11 Oct 1984 23:04 EDT Message-ID: Sender: FAHLMAN@CMU-CS-C.ARPA From: "Scott E. Fahlman" To: JonL.pa@XEROX.ARPA Cc: Common-Lisp@SU-AI.ARPA Subject: Questions about specification and possible implementations One of the reasons that TRACE is only semi-documented (you ought to have it and you should call it TRACE if you do) is that it doesn't come up in portable code. It's clear that you want TRACE to be as powerful as possible -- if there are things the user might want to see that he can't see, that makes debugging a bit harder. It's also clear that implementations are going to vary in the degree to which they allow you to trace calls from compiled code. Some implementations are going to use all sorts of tricks to get full efficiency, and some of these may interfere with tracing. I think that the right attitude is that interpreted code definitely should be traceable and anything else you get is gravy. So it seems to me that each implementation should do the best it can on this, and should document which things you can trace and which you can't. 
Do we need to say anything more about this? -- Scott  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 11 Oct 84 19:50:57 PDT Received: ID ; Thu 11 Oct 84 22:47:57-EDT Date: Thu, 11 Oct 1984 22:47 EDT Message-ID: Sender: WHOLEY@CMU-CS-C.ARPA From: Skef Wholey To: JonL.pa%Xerox@CMU-CS-C.ARPA Cc: Common-Lisp@SU-AI.ARPA, PERDUE%hplabs.csnet%Csnet-Relay@CMU-CS-C.ARPA Subject: Questions about specification and possible implementations In-reply-to: Msg of 11 Oct 1984 20:54-EDT from JonL.pa%xerox.arpa at csnet-relay.arpa You say that it's OK to open code some functions, like CAR and SVREF. You say that it's not OK to do something like code a call to Write-Line with :start and :end arguments as a call to some internal Keywordless-Write-Line to avoid consing a rest argument. Is it OK to open-code a function sometimes (e.g. when we know that the sequence argument to Position is a simple-string and we know that the item being searched for is a string-char, we could use the Vax SCANC instruction), but not others? Is that "too complicated a function"? Realize that no one is going to use those generic sequence functions if they always cons rest args and parse keywords at runtime. I consider it an implementor's duty to try to eliminate at least some of that runtime penalty. What about "fast function calls?" Is it alright for an implementor to compile his Lisp-level code in such a way that when system code calls Print the call is untraceable? Is it alright for the writer of a "module" to do so? Why should the random user who wants to trace Print have any reason to believe that the writer of the code supplied to him used that function? Sounds like information-hiding to me. Last I checked, that was supposed to be a good thing. The only difference here seems to be what you might consider reasonable things to open-code based on your knowledge about implementing Lisp. That's going to vary a whole lot from machine to machine. 
Partition the set of Common Lisp functions into three classes: 1. Those that may be open-coded. 2. Those that may either be open-coded or left alone. 3. Those that should never be open-coded. Can you? Should you? Were you the one complaining about Common Lisp discriminating against stock hardware? The sorts of optimizations I want to do (and as a user, want done for me) can only help the performance on stock hardware. If it were ruled that it was absolutely not permissible to cleverly code any keyword-taking functions, "we" microcoded-machine people would just go off and write some fast keyword-parsing microcode. What would the Vax and 20 people be left doing? Remember that as a last resort, good old DISASSEMBLE is there. There IS a way for the user to peer into compiled code to see what's really happening. Information-hiding is a good thing, but a better thing about Lisp is that it has usually provided a way around it, so that people can get work done. I've used the Lisp Machine Inspector to "unlock" a package to hack some symbols in that package. I didn't need to read any documentation or source code. There's no real reason to treat compiled code as a black box and TRACE as the only tool for finding things out about the contents of that box. Contrary to what Hedrick says, this IS Lisp.
--Skef  Received: from CSNET-RELAY.ARPA by SU-AI.ARPA with TCP; 11 Oct 84 18:23:53 PDT Received: from xerox.arpa by csnet-relay.arpa id a006036; 11 Oct 84 20:56 EDT Received: from Chardonnay.ms by ArpaGateway.ms ; 11 OCT 84 17:54:37 PDT Date: 11 Oct 84 17:54 PDT From: JonL.pa%xerox.arpa@csnet-relay.arpa Subject: Re: Questions about specification and possible implementations In-reply-to: Skef Wholey 's message of Tue, 9 Oct 84 17:11 EDT To: Wholey%cmu-cs-c.arpa%csnet-relay.arpa@csnet-relay.arpa cc: PERDUE%hplabs.csnet%csnet-relay.arpa%csnet-relay.arpa@csnet-relay.arpa, Common-lisp%su-ai.arpa%csnet-relay.arpa%csnet-relay.arpa@csnet-relay.arpa Skef, I see a more serious problem with TRACE than can be encompassed by your proposals. Basically, it's still the same question raised by Hedrick initially -- when is it permissible for the compiler to *rename* functions out from underneath you. The "you" in the preceding sentence is not merely the small module of functions that some programmer happens to be working on at a given point in time. If he so foolishly wants to trace CAR (or a keyword function, or whatever), he may just as likely want it traced in the system functions which use it, or in the functions from an independently supplied module. I'm not fond of constraining every function in the book -- CAR, CDR and SVREF come to mind as outrageous to be so constrained. But I do "buy" Hedrick's argument about keyword-admitting and other such functions [e.g., it would not be legit to rename LDB into SI:**LOAD-BYTE simply in order to avoid the creation of a byte pointer].
-- JonL --  Received: from CMU-CS-SPICE.ARPA by SU-AI.ARPA with TCP; 10 Oct 84 09:07:31 PDT Date: Wednesday, 10 October 1984 11:58:31 EDT From: Joseph.Ginder@cmu-cs-spice.arpa To: wholey@cmu-cs-c.arpa cc: T-Users@yale.arpa, Lisp-Forum@mit-mc.arpa, Common-Lisp@su-ai.arpa Subject: UCLA Perq Message-ID: <1984.10.10.15.38.13.Joseph.Ginder@cmu-cs-spice.arpa> The Perq at UCLA, which I have seen in person, is running the Beta release of S5 with the "slasher 1" version of lisp -- that is, a pre-release, not even Beta. Thus, it does not have any of the new interfaces. I believe that it is using Ucode from before the fast funcall stuff and caching was put in, but don't recall for sure. That is, the ucode is probably 2 major revisions old; and at least 1 major revision old. Also, no one has a Perq lisp yet with the faster startup granted by using only the new interfaces. (Every version of Perq lisp that I know of still uses the old ones at least internally -- thus they must be initialized as before -- in addition to the new, fast initializing versions.) --Joe P.S. No Perq has a 68000 in it. Or a 68010. All have board-level, micro-programmable CPU's that aren't true bit-slices (like AMD2901's) but have been described as such since the chip used for the ALU is 4-bits wide and five are used to construct a 20-bit ALU. (No, I don't remember which chip, but it's not a secret. It's some TI ALU chip.) And though the current board has a 20-bit wide ALU, external data paths are 16 bits wide.  
Received: from CSNET-RELAY.ARPA by SU-AI.ARPA with TCP; 9 Oct 84 14:21:23 PDT Received: from cmu-cs-c.arpa by csnet-relay.arpa id a010978; 9 Oct 84 17:14 EDT Received: ID ; Tue 9 Oct 84 17:11:19-EDT Date: Tue, 9 Oct 1984 17:11 EDT Message-ID: Sender: WHOLEY%cmu-cs-c.arpa@csnet-relay.arpa From: Skef Wholey To: PERDUE%hplabs.csnet%csnet-relay.arpa@csnet-relay.arpa Cc: Common-lisp%su-ai.arpa%csnet-relay.arpa@csnet-relay.arpa Subject: Questions about specification and possible implementations In-reply-to: Msg of 8 Oct 1984 20:47-EDT from PERDUE%hplabs.csnet at csnet-relay.arpa Ok, I retract that terminology. My point was that it's just as correct to compile a keyword-taking function in a funny way as it is to open-code CAR. If a compiler is allowed to do the latter, it can certainly do the former. JonL points out that this weakens the debuggability of compiled code, since some poor guy out there might want to trace CAR (or some other specially compiled function) to find out what's going on. There are a few solutions to this problem: 1. He can debug his code in the interpreter -- a problem if he's already loaded a lot of compiled code. But anything calling itself a Lisp environment should give the user a way to easily load the interpreted definition of a few functions. This could be done in the editor, or perhaps by grabbing the interpreted definition if the compiler had the foresight to save it someplace for him. 2. He can (PROCLAIM '(NOTINLINE CAR)) and recompile the functions he's interested in. If he doesn't suspect that a function is being compiled in some funny way, he should be able to find out using DISASSEMBLE in the worst case. 3. Common Lisp could provide a declaration or somesuch that would cause the compiler to perform no special compilation of functions. When debugging, one could still get something like compiled speed while retaining full traceability. This would presumably be easier than declaring everything you might want to trace NOTINLINE.
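Option 2 can be illustrated concretely. The following is a minimal sketch, assuming a Common Lisp implementation where CAR is normally open-coded; SUSPECT is a made-up example function:

```lisp
;; Ask the compiler to stop open-coding CAR from here on, so that
;; calls to it go through the function cell and can be traced.
(proclaim '(notinline car))

;; Recompile the function whose calls we want to observe; its calls
;; to CAR are now genuine function calls.
(defun suspect (x) (car x))
(compile 'suspect)

;; Worst case: look at the generated code directly.
(disassemble 'suspect)
```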
Making option 1 feasible in practice requires a good environment (read not batch) -- not something part of the Common Lisp specification. Still, it isn't that hard for an implementor to provide such tools. Option 3 requires a change to the language specification, but is probably even easier to implement than Option 1. Option 2 is the worst-case, current language spec option. Anyone for (PROCLAIM '(ENABLE-INLINE T)) and (PROCLAIM '(ENABLE-INLINE NIL))? With better names and syntax maybe? Shades of (SSTATUS FOO T)... Gasp. --Skef  Received: from CSNET-RELAY.ARPA by SU-AI.ARPA with TCP; 9 Oct 84 12:30:12 PDT Received: from hplabs by csnet-relay.csnet id ac10239; 9 Oct 84 15:12 EDT Received: by HP-VENUS id AA11495; Mon, 8 Oct 84 17:46:25 pdt Message-Id: <8410090046.AA11495@HP-VENUS> Date: 8 Oct 1984 1747-PDT From: PERDUE%hplabs.csnet@csnet-relay.arpa Subject: Re: Questions about specification and possible implementations To: Common-lisp%su-ai.arpa@csnet-relay.arpa In-Reply-To: Your message of 4-Oct-84 Source-Info: From (or Sender) name not authenticated. I consider Skef Wholey to be incorrect when he says that "it is an error" to redefine BUILT-IN keyword functions. In the published Common LISP manual on page 90 under the heading "symbol-function" I see: The global function definition of a symbol may be altered by using setf with symbol-function. . . . It is an error to attempt to redefine the name of a special form (see Table 5-1). Does anyone else know differently from this? -------  Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 8 Oct 84 21:03:34 PDT Date: 9 October 1984 00:02-EDT From: Kent M Pitman Subject: Maclisp interpreter/compiler nearly identical w/ (declare (special t)) on?? To: fateman @ UCBDALI cc: common-lisp @ SU-AI In-reply-to: Msg of Fri 14 Sep 84 09:40:12 pdt from fateman%ucbdali at Berkeley (Richard Fateman) Date: Fri, 14 Sep 84 09:40:12 pdt From: fateman%ucbdali at Berkeley (Richard Fateman) ...
If we look at Maclisp with (declare (special t)), is it not the case that (nearly) identical semantics are used for the interpreter and compiler, given CORRECT programs? ... I'm afraid I take offense at this. Maclisp interpreted semantics are *very* different from compiled semantics. Common Lisp is not completely there, but puts its heart in the right place by at least acknowledging the problem. Also, it has corrected at least some of the Maclisp problems. Here are a few Maclisp interpreter/compiler differences that come to mind at the moment to jar your memory; I'm sure this is not an exhaustive list: * If you make a "correct" number declaration for X and Y, you may risk that (PLUS X Y) which would run correctly interpreted will be open-coded as if (+ X Y) had been written. In the PLUS case interpreted, however, overflow becomes a bignum, while in the + case interpreted, the result is undefined (wraparound or some such thing occurs undetected). I didn't see any obvious place in the CL manual where it says if this is fixed. * There are a bunch of control variables such as CAR, CDR, and EVAL which are (necessarily) ignored by compiled code. They radically affect the semantics of interpreted code by allowing the user to extend the notion of what was "defined" without providing a mechanism for informing the compiler of such extensions. There are no such variables in CL. * The status of macros in the car of a form was not uniformly treated in interpreted and compiled form. In the form ((FOO BAR) ...), if FOO was a macro, the interpreter would cause the result of expanding (FOO BAR) to be evaluated before being applied, while the compiler would compile an apply to the result of expanding (FOO BAR) directly rather than compiling a call to the result of evaluating the expansion. Early Maclisp actually attempted to define this as reasonable behavior; I think it was eventually flushed and compiler warnings were issued in some cases saying to use FUNCALL.
I could not determine where or whether the CL manual takes a stand on this issue. It would be nice if it simply declared that forms other than ((LAMBDA ...) ...) whose cars are lists are explicitly undefined. * Compiled macros, if redefined, have no effect on running programs. This is still true in CL. If interpreted CL definitions were ENCLOSE'd (a la Scheme) at definition time, this could be fixed. * Constant quoted expressions which were EQUAL will have been made EQ by the FASLOAD process. This fact has been heavily depended upon by Macsyma on the pdp10 to avoid wasting huge amounts of space needlessly, and must be understood by all programs to avoid accidentally side-effecting constants which ought by rights to have no relation to each other. I guess this one is under active discussion. * Function definitions for interpreted and compiled code differ. (FUNCALL (GET 'X 'EXPR)) runs the same interpreted and compiled, but (DEFPROP X (LAMBDA () 3) EXPR) runs differently compiled than interpreted. There is definitely an advance in CL by having simply a function cell and eliminating this nonsense about having 18 different places to store functions. There's some question in my mind about whether macros belong in the function cell, but this is at least treated uniformly from interpreter to compiler. * EVAL-WHEN does different things in an EVAL context than a COMPILE context. I can't imagine wanting to fix this given the current state of the world, although if, as I mentioned above, CL had an analog to Scheme ENCLOSE which was used at appropriate times in both interpreter and compiler, I could imagine changing the set of keys that EVAL-WHEN used (to not be EVAL and COMPILE, but instead something more abstract) so that they were uniformly treated in interpreted and compiled code. * EVALFRAME works completely differently in compiled and interpreted contexts since the compiler may optimize out intermediate call frames. Hence (LET ((THIS-FRAME (EVALFRAME ...))) ...)
works fine compiled but must be treated very carefully interpreted. This issue isn't addressed by CL, hence is sort of fixed. * Number EQness of small numbers may not be preserved in compiled code, but is guaranteed in many cases in interpreted code. Technically, it was never guaranteed that this was so, but in practice, I (and probably others) wrote code that relied on the fact and achieved useful ends. The introduction of EQL in CL makes a positive stride in the right direction. I don't know if I consider this problem completely resolved.  Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 8 Oct 84 19:42:25 PDT Date: 8 October 1984 22:42-EDT From: Kent M Pitman Subject: Questions about OPEN To: Common-Lisp @ SU-AI cc: RAM @ CMU-CS-C, heller%umass-cs.csnet @ CSNET-RELAY, Moon @ SCRC-RIVERSIDE In-reply-to: Msg of Sat 6 Oct 84 00:05 EDT from David A. Moon Everyone has spoken about what they think the published word says will happen, but no one has really said anything about what users might in fact need. If I'm writing code that works on an open file and wants to know the truename, it is generally so I can either record information somewhere (eg, type "Script file PS:FOO.TEXT.7 opened." on the console) or so I can use the information in some formal operation (such as producing a new pathname from the truename to put next to it in a directory). If I care about matching version numbers for some reason (eg, I want "FOO.OUTPUT.7" to accompany "FOO.INPUT.7"), then it may then be important to me that I get back # from truename and not # and it's a pain to have to write the code that probes FOO.TEXT for numberness (or zeroness, in the case of Twenex, or...). I agree with Moon that functions that can't fulfill their contract due to OS limitations should err, but I know that I have always been willing to use heuristic information in this particular area and I think that in some cases relaxing the definitions a little could work pretty well. 
Eg, TRUENAME might return two values, a pathname and a boolean saying whether that pathname was really the truename or just a good guess. This, I think, would provide the user with all the information that either kind of operating system could provide without being too obtrusive. If someone thinks this would occasionally do the wrong thing, it could be safeguarded by taking an optional arg ERROR-P, default T, which if true would signal an error before returning a second value of NIL, so the person would have to do (TRUENAME obj NIL) to get back two values. I would tend to oppose solutions to these issues which will hide useful information for the sake of generality, preferring to support solutions which present all the known data to the user in a flexible way and leave him to decide about how to handle the problem. By the way, I believe that the decision to have TRUENAME err rather than return multiple values is partly influenced by the fact that the LispM now has a sophisticated condition-handling system which can make use of complicated information conveyed in error objects. Since Common-Lisp error handling primitives are not so sophisticated -- good grief, I just checked the manual and there's not even an ERRSET primitive; let me amend that -- since Common-Lisp error handling primitives are quite literally non-existent, there is no way a user can even do: (OR (IGNORE-ERRORS (TRUENAME ...)) (PATHNAME ...)) right now if he expects a problem. But certainly if we expect this to be a common case, allowing something like the above-proposed (TRUENAME ... NIL) would be more concise and informative.
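The two-value TRUENAME proposed above could be sketched roughly as follows; TRUENAME-OR-GUESS is a hypothetical name, and the fallback "good guess" here is simply the argument coerced to a pathname:

```lisp
;; Sketch of the proposal: return (1) a pathname and (2) a boolean
;; saying whether it is a verified truename.  With ERROR-P true (the
;; default), signal an error instead of returning an unverified guess.
(defun truename-or-guess (thing &optional (error-p t))
  (let ((verified (probe-file thing)))
    (cond (verified (values verified t))
          (error-p (error "Cannot determine the truename of ~S." thing))
          (t (values (pathname thing) nil)))))
```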
-kmp  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 8 Oct 84 12:52:52 PDT Received: ID ; Mon 8 Oct 84 15:51:34-EDT Date: Mon, 8 Oct 1984 15:51 EDT Message-ID: Sender: WHOLEY@CMU-CS-C.ARPA From: Skef Wholey To: Charles Dolan Cc: Common-Lisp@SU-AI.ARPA, Lisp-Forum@MIT-MC.ARPA, T-Users@YALE.ARPA Subject: Benchmark - PERQ CL vs Apollo T In-reply-to: Msg of 8 Oct 1984 13:17-EDT from Charles Dolan Date: Monday, 8 October 1984 13:17-EDT From: Charles Dolan UCLA has a demo unit of the new PERQ 68000 based workstation running Common Lisp. The PERQ is not a 68000 based machine. There's a bit-sliced processor inside of it. It's basically a 16-bit machine. PERQ DN300 DN460 (tak ...) [3] 6.3 sec 3.4/6.3 sec [2] 1/2.7 sec [2] Tak runs in under 5 seconds on my Perq. The Perq Common Lisp implementation has been undergoing extensive tuning during the past few months, and I bet you've got a somewhat old version. The current situation is that people at CMU are still doing most of the development work, while the Lisp people at Perq systems are doing things like getting better interfaces to the operating system servers up. Over the past two weeks I've added register instructions to the Lisp instruction set (full runtime type-checking, by the way). Some benchmarky things have improved dramatically, for example, (dotimes (i 1000000)) ; That's one million took over 20 seconds before the addition of registers, and now takes 5.5 seconds. I bet the register instructions will help some of your other benchmarks as well. "Benchmarking is, at best, a black art." I'd like to see some large benchmarks run on a large number of machines (something like OPS5). I like things along the lines of compilation speed and how fast the editor reads and writes files. Those are things that most people do a lot, and spend time waiting for. Common Lisp and T are very different languages, and I bet I could devise some benchmarks that ran significantly faster on the Perq than on the Apollo machines.
Have you tried CTAK (TAK using catch and throw)? I don't want to thwart your benchmarking effort, and I'm not offended or anything, but I felt I should mention that the Perq Lisp system is still in the tuning phase. --Skef  Received: from UCLA-LOCUS.ARPA by SU-AI.ARPA with TCP; 8 Oct 84 10:45:12 PDT Date: Mon, 8 Oct 84 10:17:49 PDT From: Charles Dolan To: T-Users@Yale, Lisp-Forum@MIT-MC, Common-Lisp@Su-AI Subject: Benchmark - PERQ CL vs Apollo T UCLA has a demo unit of the new PERQ 68000 based workstation running Common Lisp. We are currently using Apollo workstations and T. I compared the workstations on the following points: standard shell startup, standard editor startup, lisp editor startup, compilation, (fact 100) - recursive factorial, (tak 18 12 6) - code given below, (reverse1 *long-list*) - recursive reverse of a 100 element list, and (reverse2 *long-list*) - recursive reverse of a 100 element list using closures. The DN300 is Apollo's low end workstation. It had 1.5 MB and no local disk. The DN460 is Apollo's bit-slice implementation of the 68000 instruction set.
                 PERQ        DN300             DN460
shell            10 sec      2/5.5 sec [5]     1/3 sec [5]
editor           7 sec       1 sec             1 sec
lisp editor [1]  14/1.5 sec  23.3/3 sec        10.5/1.8 sec
compilation [4]  11 sec      54 sec            24 sec
(fact 100)       2.1 sec     1.12 sec          0.61 sec
(tak ...) [3]    6.3 sec     3.4/6.3 sec [2]   1/2.7 sec [2]
(reverse1 ...)   2.2 sec     1.2 sec           0.42 sec
(reverse2 ...)   2.7 sec     1.6 sec           0.67 sec
All times were computed by running the function in a loop 30 times and dividing the wall clock time by 30. [1] Since the lisp editors are embedded in the lisp environment, two times are given. The first is for the initial startup of the editor the first time it is invoked. The second is for subsequent invocations. [2] The faster of the two times is for the case when block compilation is used. Here the recursive calls to (tak ...) are compiled as jumps. [3] In the T code explicit calls to the FIXNUM arithmetic routines 'FX<' and 'FX-' were used.
[4] The code which was compiled is the code below plus one auxiliary function for each benchmark which performed the loop. [5] The first time is for the AEGIS shell. The second is for the AUX cshell. The code follows.
T: (define (fact i) (cond ((eq? i 0) 1) (T (* i (fact (-1+ i))))))
Common Lisp: (defun fact (i) (cond ((eq i 0) 1) (T (* i (fact (1- i))))))
T: (define (tak x y z) (cond ((not (fx< y x)) z) (T (tak (tak (fx- x 1) y z) (tak (fx- y 1) z x) (tak (fx- z 1) x y)))))
Common Lisp: (defun tak (x y z) (declare (type integer x) (type integer y) (type integer z)) (cond ((not (< y x)) z) (T (tak (tak (- x 1) y z) (tak (- y 1) z x) (tak (- z 1) x y)))))
T: (define (reverse1 l) (cond ((null? l) nil) (T (append (reverse1 (cdr l)) (list (car l))))))
Common Lisp: (defun reverse1 (l) (cond ((null l) nil) (T (append (reverse1 (cdr l)) (list (car l))))))
T: (define (reverse2 l) (cond ((null? l) (lambda () l)) (T (lambda () (append (apply (reverse2 (cdr l)) ()) (list (car l)))))))
Common Lisp: (defun reverse2 (l) (cond ((null l) (function (lambda () l))) (T (function (lambda () (append (apply (reverse2 (cdr l)) ()) (list (car l))))))))
-Charlie Dolan cpd@UCLA-LOCUS.ARPA  Received: from UCLA-LOCUS.ARPA by SU-AI.ARPA with TCP; 8 Oct 84 10:33:22 PDT Date: Mon, 8 Oct 84 10:23:36 PDT From: Charles Dolan To: common-lisp@su-ai Subject: How do I get on this list? Is this how? If not please send me more information. -Charlie Dolan cpd@UCLA-LOCUS.ARPA  Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 5 Oct 84 22:55:59 PDT Received: ID ; Sat 6 Oct 84 01:55:29-EDT Date: Sat, 6 Oct 1984 01:55 EDT Message-ID: From: Rob MacLachlan To: "David A. Moon" Cc: common-lisp@SU-AI.ARPA, "Robert (LISPer 68K)Heller" Subject: Questions about OPEN In-reply-to: Msg of 6 Oct 1984 00:05-EDT from David A. Moon I think that I mostly agree with your suggestions, but there are some areas of confusion: What should the semantics of Truename be as compared to Probe-File?
My original interpretation was: (defun truename (file) (or (probe-file file) (error "File ~S does not exist." file))) You seem to want to assign subtly different semantics to the two. Under your interpretation of Truename, what is the name returned when the stream is open for output and the file already existed? Is it: 1] The preexisting file? 2] The new file? 3] Either one? 4] Something else? 5] Undefined? As we have discussed at length, 2 is impossible in some OS'es. 1 is probably always possible, but might not be considered the most natural by everyone. Rob  Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 5 Oct 84 21:35:28 PDT Received: from SCRC-EUPHRATES by SCRC-QUABBIN via CHAOS with CHAOS-MAIL id 89060; Sat 6-Oct-84 00:05:08-EDT Date: Sat, 6 Oct 84 00:05 EDT From: "David A. Moon" Subject: Questions about OPEN To: Rob MacLachlan , common-lisp@SU-AI.ARPA cc: "Robert (LISPer 68K)Heller" In-Reply-To: Message-ID: <841006000512.9.MOON@EUPHRATES.SCRC.Symbolics> Date: Fri, 5 Oct 1984 02:46 EDT From: Rob MacLachlan I concede those points, and was aware that, for example, Rename-File has somewhat different semantics on a stream than it has on the name. I can deal with that, since all we have to do in Spice Lisp to implement this is change the name we write the data out as when we close the file. Your clarification, however, seems to assume that there is some concept of file identity independent of the file name. This is not true in Sesame, and is probably not true in many file systems. There is no way to keep track of a file across renamings and other operations. I guess we have several options here: 1] We can leave the manual the way it is, with the semantics of use of streams vis a vis pathnames largely unspecified. This would permit implementations to do arbitrarily whizzy things, but would make the use of stream arguments difficult in portable code, and thus dubious as a Common Lisp feature. 2] We can try to define the semantics.
This course is fraught with peril, since capabilities of operating systems differ wildly. 3] We can punt and remove them. Is there really such a big issue here? All the functions in section 23.3 are allowed already to have implementation-dependent variations (note at the head of the section). OPEN is allowed similar freedom (note on page 421). All the other functions, other than TRUENAME, when given a stream simply convert it into a pathname. In fact I can find nothing in the manual that says how this is done! But I'm sure the intention is that PATHNAME when applied to a stream simply returns the argument that was given to OPEN to create that stream (coerced to a pathname of course). So I think the only function with a real problem is TRUENAME. I believe that the note at the front of section 23.3 ought to be applied to TRUENAME as well, word for word. In many cases, the CLM seems to do 1, on the grounds that it would be nice to have a standard way to say X, given that X is possible. I have mixed feelings about this. In the case of the :If-Exists/:If-Does-Not-Exist arguments to Open, it seems reasonable, since even an inconceivably brain-damaged OS should be able to provide some sort of reasonable default action, and the program will still work. In the case of Rename-File, though, the call will error out, (or worse, simply quietly fail) if the capability to rename open files doesn't exist. A program counting on this feature would not work, and thus would not be portable. I think that this business of streams-as-files is especially problematical in our implementation, since there is no "file attached to the stream", only a memory buffer area where you are building up a file that you may write out when the stream is closed. Sesame is in fact a system such as you mentioned, in which it is impossible to determine the truename of an "open" file, since the version number is determined when the file is written. 
It seems to me that this argument could just as easily be used to support the position that the feature should be flushed as being impossible to specify in an implementation-independent fashion. For us, the Probe-File and Truename functions are the most annoying. The Rename-File and Delete-File functions, which don't have to work right according to the CLM, are easily implementable. The CLM doesn't say this, but I would hope that these functions are guaranteed either to do what the manual says or to signal an error, not to quietly ignore the request or anything like that. This can be pernicious -- we just found a bug of such nature this week. I believe that it is okay for a Common Lisp implementation to signal an error for any file operation, saying "the underlying file system our implementation is built on is incapable of implementing that Common Lisp function, too bad". I no more believe that Common Lisp implementors should be required to rewrite their host operating systems than I believe that they should be required to redesign their host hardware. Of course this means that some programs will be more portable than others, and some programs will port to some implementations but not others. In practice that is always the situation and there is nothing you can do about it. But I believe the note at the front of section 23.3 ought to be changed to say that these functions will either work or signal an error. None of them need "is an error" freedom since they aren't operations whose efficiency matters. Currently, our implementation does not obey the (probe-file (open ...)) => t invariant.
We could dummy it up so that it would return the pathname that it thinks it is going to write the file out to, but that would cause anomalous behavior such as: (probe-file stream) => (probe-file ) => nil I believe the statement under PROBE-FILE, "Note that if the file is an open stream associated with a file, then probe-file cannot return NIL but will produce the true name of the associated file" to be bogus. I think PROBE-FILE should simply call PATHNAME to coerce its argument to a pathname, as the functions in 23.1 and 23.2 do, rather than treating streams specially, as the other functions in 23.3 do. I think your argument above is convincing and my suggested change would make things more consistent.  Received: from MIT-MC.ARPA by SU-AI.ARPA with TCP; 5 Oct 84 20:46:32 PDT Received: from SCRC-EUPHRATES by SCRC-QUABBIN via CHAOS with CHAOS-MAIL id 89058; Fri 5-Oct-84 23:46:04-EDT Date: Fri, 5 Oct 84 23:46 EDT From: "David A. Moon" Subject: Re: Questions about OPEN To: Charles Hedrick , common-lisp@SU-AI.ARPA cc: heller%umass-cs.csnet@CSNET-RELAY.ARPA In-Reply-To: The message of 5 Oct 84 16:36-EDT from Charles Hedrick Message-ID: <841005234616.8.MOON@EUPHRATES.SCRC.Symbolics> Date: 5 Oct 84 16:36:33 EDT From: Charles Hedrick ....At the moment rename-file implies a close in our implementation. This may be a bug. (Would someone from the gang of 5 like to comment?) I don't think anything but the CLOSE function should close a stream. So it's a bug. By the way, a suggestion for doing truename on systems where you don't know the name until close: If you have some way of making a file invisible (e.g. by setting file protection so that no one can see it), you could close it when it is initially opened, make it invisible, and then open it for append access. This is roughly what Tops-20 does internally. 
(Tops-20 has the same problem that this scheme would have: if the system crashes while you are creating a file, you have this partially-created file lying around which you may need to use unusual methods to get rid of.) I think it would be better to define the language to make as few assumptions about the operating system as possible, rather than requiring some implementors to go through contortions to make their file systems look like the file systems of other implementors. Obviously this has to stop before you reach a point where no one can write a portable program because the language says nothing about the semantics of any file system operation! In the particular case of TRUENAME on an output stream that created a new file, I think it would be wisest to say that this operation "is an error" until the stream has been closed. Any system that knows the name earlier than that can simply not check for the "error", and can save the name in the stream at the time it closes it, for later use by the TRUENAME function.  Received: from SU-SCORE.ARPA by SU-AI.ARPA with TCP; 5 Oct 84 18:15:09 PDT Received: from Xerox.ARPA by SU-SCORE.ARPA with TCP; Fri 5 Oct 84 17:49:02-PDT Received: from Semillon.ms by ArpaGateway.ms ; 05 OCT 84 17:49:35 PDT Date: 5 Oct 84 17:49 PDT From: JonL.pa@XEROX.ARPA Subject: Re: Aside about encapsulations In-reply-to: Steven 's message of Fri, 5 Oct 84 13:35 EDT To: Handerson@CMU-CS-C.ARPA cc: JonL.pa@XEROX.ARPA, COMMON-LISP%SU-AI@SU-SCORE.ARPA, HEDRICK@RUTGERS.ARPA, Moon@SCRC-RIVERSIDE.ARPA, Rem%IMSSS@SU-SCORE.ARPA, Wholey@CMU-CS-C.ARPA Sounds quite interesting; two questions: 1) does your encapsulation notion bear any resemblance to the one described in the Lisp Machine Manual? 2) what do *you* mean by tracing any setfable "location"? I can imagine tracing being applicable to SETF instances; and there are the "good old days" of memory address traps wherein one could invoke a function whenever any access is made to a specific address.
Finally, I might mention that "active values" is at the core of LOOPS, and there is a minimal syntax in LOOPS for specifying "auditing" (similar to tracing) a particular variable, or a particular record slot. "Active values" nest in a way reminiscent of the way that encapsulations work in the Lisp Machine. -- JonL --  Received: from RUTGERS.ARPA by SU-AI.ARPA with TCP; 5 Oct 84 13:37:34 PDT Date: 5 Oct 84 16:36:33 EDT From: Charles Hedrick Subject: Re: Questions about OPEN To: common-lisp@SU-AI.ARPA, heller%umass-cs.csnet@CSNET-RELAY.ARPA, Moon@SCRC-RIVERSIDE.ARPA In-Reply-To: Message from "Rob MacLachlan " of 5 Oct 84 15:49:00 EDT Maybe I didn't make it clear why I sent the message. I thought somebody was suggesting that we should try to separate functions that take stream arguments from those that take filenames (pathnames). In this case I would expect that read and write would take streams, and rename and delete would take filenames. If nobody proposed this, then my answer is genuinely devoid of usefulness. I was arguing that it is useful to continue to have rename-file and delete-file take either streams or filenames. I assume I don't have to convince you that they should take filenames. Sometimes we just want to rename a file, and if it isn't open, we don't want to bother having to open it, rename it, and close it. The argument is (or at least I was imagining that it is) about whether rename and delete should be legal on streams. I claim that they should, for the very case that concerns you: I may not know the name of the file that I want to rename. Thus it is useful to have an operation that requests changing its name to something new, without reference to any name that it may happen to have now.
In case you don't know the name of a file until you close it, you have at least two choices:

- close it, remember the name it got assigned (I trust you can find out the name of a file when you close it?), rename it, and, if the semantics of rename-file are deemed to require this, reopen it.
- if your O.S. lets you not specify the name until you close it, don't do anything to the file -- just change the name you are going to give it when it finally gets closed.

I certainly don't have any solution to the truename problem.  I understand that there is a genuine problem with that function on some O.S.'s.  I was trying to point out that allowing operations on open streams can in some cases make the situation a bit better, because we don't ever need to know the old name of the file.

I have thought about your question as to when the JFN should be released.  I intended it to get released when you closed the stream.  At the moment rename-file implies a close in our implementation.  This may be a bug.  (Would someone from the gang of 5 like to comment?)  But that is not relevant to this question.  I was trying to preserve the ability to do certain operations on a stream while it is still open, not to say that streams should never get closed.

By the way, a suggestion for doing truename on systems where you don't know the name until close: if you have some way of making a file invisible (e.g. by setting file protection so that no one can see it), you could close it when it is initially opened, make it invisible, and then open it for append access.  This is roughly what Tops-20 does internally.  (Tops-20 has the same problem that this scheme would have: if the system crashes while you are creating a file, you have this partially-created file lying around, which you may need to use unusual methods to get rid of.)
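Hedrick's first option (close the file, remember the name it got assigned, then rename) might be sketched as follows in Common Lisp.  This is only an illustration: close-and-rename is an invented name, and whether TRUENAME of a just-closed stream yields the assigned name is precisely the implementation-dependent point under debate in this thread.

```lisp
;; Sketch of "close it, remember the name it got assigned, rename it".
;; Assumes TRUENAME applied to the closed stream returns the name the
;; file finally received -- the very assumption being discussed here.
(defun close-and-rename (stream new-name)
  (close stream)
  (let ((old-name (truename stream)))   ; name the file was assigned
    (rename-file old-name new-name)))
```

Whether the caller must then reopen the file depends, as Hedrick notes, on what semantics rename-file is deemed to have.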
-------

Received: from SU-SCORE.ARPA by SU-AI.ARPA with TCP; 5 Oct 84 13:33:30 PDT
Received: from CMU-CS-C.ARPA by SU-SCORE.ARPA with TCP; Fri 5 Oct 84 13:03:44-PDT
Received: ID ; Fri 5 Oct 84 15:29:00-EDT
Date: Fri, 5 Oct 1984 13:35 EDT
Message-ID:
From: Steven
To: JonL.pa@XEROX.ARPA
Cc: COMMON-LISP%SU-AI@SU-SCORE.ARPA, HEDRICK@RUTGERS.ARPA, Moon@SCRC-RIVERSIDE.ARPA, Rem%IMSSS@SU-SCORE.ARPA, Wholey@CMU-CS-C.ARPA
Subject: Aside about encapsulations
In-reply-to: Msg of 4 Oct 1984 22:11-EDT from JonL.pa at XEROX.ARPA

I just wrote an encapsulation package for Spice Lisp that can trace an arbitrary setfable location.  The encapsulation is a closure that does all the real work (normally, calling the encapsulation functions, but also adding and deleting encapsulation functions and deleting itself).  Most implementations could probably use this to encapsulate the expansion function for a macro [we use (MACRO . ) on the function cell].  This encapsulation can wrap the result of the expansion with arbitrary forms, in order to "encapsulate the macro call."  [Probably not desirable, since compiled code would contain the encapsulation...]  Someday soon I'll finish the job and write a trace package.

The thing about encapsulating special forms is that the encapsulation function has to be a special form.  All the information could be stored elsewhere, but you have to know which special form you're talking about.  I haven't thought up a way to do this, unless we add closure special forms to the system (looks hard) or the encapsulation function examines the stack (eeew grossss).  Anybody?
-- Steve

Received: from CMU-CS-C.ARPA by SU-AI.ARPA with TCP; 5 Oct 84 12:50:10 PDT
Received: ID ; Fri 5 Oct 84 15:49:20-EDT
Date: Fri, 5 Oct 1984 15:49 EDT
Message-ID:
From: Rob MacLachlan
To: Charles Hedrick
Cc: common-lisp@SU-AI.ARPA, heller%umass-cs.csnet@CSNET-RELAY.ARPA, Moon@SCRC-RIVERSIDE.ARPA
Subject: Questions about OPEN
In-reply-to: Msg of 5 Oct 1984 13:09-EDT from Charles Hedrick

You don't seem to follow what Moon and I were saying about truename (and probe-file) -- in some filesystems it is TOTALLY IMPOSSIBLE with *any* amount of code to implement the semantics described in the manual.  The usual cause for this is that the version number is unknown until the stream is closed.  A secondary problem is that in some OS'es, a newly created file does not exist in the filesystem namespace until it is closed.  In any case, rename-file can only be done on *open* files according to the CLM.  And if you keep the JFN around for every file stream there has ever been, when are you going to do the RLJFN?

Rob

Received: from RUTGERS.ARPA by SU-AI.ARPA with TCP; 5 Oct 84 10:18:13 PDT
Date: 5 Oct 84 13:09:40 EDT
From: Charles Hedrick
Subject: Re: Questions about OPEN
To: RAM@CMU-CS-C.ARPA
cc: Moon@SCRC-RIVERSIDE.ARPA, common-lisp@SU-AI.ARPA, heller%umass-cs.csnet@CSNET-RELAY.ARPA
In-Reply-To: Message from "Rob MacLachlan" of 5 Oct 84 02:46:00 EDT

On the DEC-20 it would certainly be better to maintain the distinction between filename and stream.  A common strategy for an editor is to write a new copy of a file with a temporary file name, close it, and then rename it on top of the original.  This minimizes the probability of damage in a crash.  One would prefer to use the same JFN for this entire process.  A JFN is a small integer that is a handle on a file.  If you keep the JFN active, you are sure that nobody else is going to rename the file out from under you or do anything else nefarious.  This is easy if all of the operations can be done on streams.
My feeling is that the distinction makes sense on some OS's, and on others causes no harm (other than extra code), so it should be kept.

-------
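The editor strategy Hedrick describes (write a new copy under a temporary name, close it, rename it on top of the original) sketches out roughly as follows in Common Lisp.  The helper name and the "tmp" type convention are invented for illustration, and whether RENAME-FILE will silently replace an existing file is implementation-dependent.

```lisp
;; Sketch of the crash-resistant update Hedrick describes: write the
;; new contents under a temporary name, then rename over the original,
;; so a crash mid-write leaves the old file intact.
(defun safe-rewrite-file (pathname write-fn)
  (let ((temp (make-pathname :type "tmp" :defaults pathname)))
    (with-open-file (s temp :direction :output :if-exists :supersede)
      (funcall write-fn s))
    ;; Whether an existing file is silently replaced here is
    ;; implementation-dependent.
    (rename-file temp pathname)))
```

Note that this is exactly the case where keeping one JFN (or one stream) for the whole sequence pays off: nothing can rename the file out from under you between the write and the rename.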