Received: from SU-AI.ARPA by AI.AI.MIT.EDU 18 Jun 86 15:40:40 EDT Received: from MIT-LIVE-OAK.ARPA by SU-AI.ARPA with TCP; 18 Jun 86 12:28:22 PDT Received: from MIT-CHERRY.ARPA by MIT-LIVE-OAK.ARPA via CHAOS with CHAOS-MAIL id 2798; Wed 18-Jun-86 15:30:07-EDT Date: Wed, 18 Jun 86 15:28 EDT From: Soley@MIT-XX.ARPA Subject: Re: Out-of-range subsequences To: preece%ccvaxa@GSWD-VMS.ARPA, COMMON-LISP@SU-AI.ARPA In-Reply-To: <8606181414.AA11649@gswd-vms.ARPA> Message-ID: <860618152801.2.SOLEY@MIT-CHERRY.ARPA> Message-Id: <8606181414.AA11649@gswd-vms.ARPA> Date: Wed, 18 Jun 86 09:14:35 cdt From: preece%ccvaxa@gswd-vms.ARPA (Scott E. Preece) To: COMMON-LISP@su-ai.arpa Subject: Re: Out-of-range subsequences > > From: Daniels.pa@Xerox.COM > > In particular, is (subseq #(1 2 3) 0 5) an error? ---------- If you clarify it to require an in-bounds endpoint, you need to do two operations. So what, it's only (subseq a 0 (min 5 (length a))). No big deal, but I don't think many users would find it confusing that the operation stops at end of string if the endpoint is out of bounds. I think they would. (subseq a 0 5) should always return a string of length 5. Under your scheme, it might return a shorter string. -- Richard  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 18 Jun 86 12:16:01 EDT Received: from IMSSS by SU-AI with PUP; 18-Jun-86 09:04 PDT Date: 18 Jun 1986 0903-PDT From: Rem@IMSSS Subject: Achieving portability, how? To: COMMON-LISP@SU-AI There seem to be several ways to achieve portability: (1) Require every implementation to provide a pure-COMMON-LISP package that has no extensions. If a program runs in this it will run anywhere. (2) Require one master host to provide a pure-COMMON-LISP package. Everyone wishing to test portability must FTP the program to that host and test portability there. (3) Provide tools for examining the text (source) of a program and reporting any non-portable syntax. These tools could be used anywhere. 
(4) Provide hardcopy documentation as to what may be an extension in any given implementation, but leave it to users to manually search their files for such non-portable usage. The recent consensus seems to be a combination of (1) and (4). Provide a pure-COMMON-LISP package, but it may contain extra keyword arguments beyond the spec, so users still have to manually search the manual for this documentation and then manually search their programs for usage of the extra stuff. I think (4) in any significant amount is unacceptable. Either require the pure-COMMON-LISP package to be identical to CLtL with no extensions whatsoever, or use (3) instead of (4) to check for the exceptions; or use (3) from the start with no deliberate exceptions. Of course there will be some non-portable code that looks portable, nothing is perfect, but an automated means that is as close to perfect portability-testing as the state of the art permits should be our goal, instead of some half-baked idea that is known from the start to be flawed in many ways. (Opinion of REM%IMSSS@SU-AI) -------  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 18 Jun 86 11:41:29 EDT Received: from HUDSON.DEC.COM by SU-AI.ARPA with TCP; 18 Jun 86 08:31:22 PDT Date: 18 Jun 86 11:23:00 EST From: "BIZET::BUY" Subject: packages and portability To: "common-lisp" Reply-To: "BIZET::BUY" This message is in reply to two previous messages by John Foderaro about packages and portability. > >> In summary, I > >> think that the most reliable way to deal with white page extended > >> objects is to provide two distinct definitions, one for each of the > >> two packages. Although double definitions may not appear to be > >> necessary for the time being and may be tedious to implement, I think > >> that they should not be precluded a priori from future > >> implementations. > > We agree that double definitions aren't needed now and would be > tedious to implement. I personally feel that they are also too > dangerous. 
> If it was determined that adding something like the > :cross-reference argument to compile-file was an illegal extension, > then I would rather create a new function (excl:excl-compile-file) than > create a double definition for compile-file. > > What extensions do you see in the future that would make double > definitions necessary? > > -john foderaro > franz inc. > I think that site specific extensions may be of various kinds. For example, the VAX LISP implementation of FORMAT includes 8 directives in addition to those specified in CLtL. In the same implementation, APROPOS and APROPOS-LIST make use of DO-SYMBOLS rather than DO-ALL-SYMBOLS, as specified in CLtL. In ZETALISP the special form IF can take more than three arguments and arguments after the second are assumed to be the ELSE clause with an implicit PROGN. Other kinds of site specific extensions are additional keyword arguments to CLtL functions, additional values returned by CLtL functions, extensions to valid input argument types for CLtL functions. Let us assume that 'X' is a symbol whose CLtL definition was extended by a particular implementation of Common LISP and let us consider the issue of what package should contain X. If X is contained in the extended package only (for portability), the pure Common LISP package will be incomplete with respect to the specification in CLtL. Conversely, if X is contained in the pure Common LISP package (for completeness of an implementation), users may end up inadvertently using site specific extensions and thus writing non-portable code. Consequently, the approach of providing two distinct implementations for certain extended CLtL objects may facilitate writing portable code. > ... let's step back and look at the big > picture. The DEC proposal and our proposal are trying to solve the same > problem: creation of a lisp system in which it is possible to write portable > programs as well as implementation specific (non-portable) programs. > > .... 
> > Under both proposals it is easy to write portable code. To verify that > your code is portable you really have to check your source to see if > you've used any extensions. Under the DEC proposal the system will > catch non-portabilities in the CLtL functions as the code is running, > but of course it won't catch them all unless you exercise every path of > the code. In the ExCL proposal you have to take the handy 'extensions > sheet' and check by hand that you haven't used any extensions. Of > course, the careful programmer will simply not use extensions in > portable code. > > ... > > Under the ExCL proposal it is simple and natural to switch between the > extended and portable modes. Under the DEC proposal the procedure is > more difficult. > > The ExCL proposal shows off the power of the package system in the ability > to easily move into and out of the extended environment. The DEC > proposal shows off one of the big misfeatures of the package system: > that you have to go to so much trouble if you merely want to shadow > one symbol in the lisp package. Do we want users to emulate this > kind of package construction? > In our opinion, the model we proposed represents a reasonable compromise between ease of use and flexibility. I don't think that the procedure to switch between the extended and the portable environment is so difficult as to constitute a serious drawback (see my previous messages about this). Moreover, the model deals uniformly with double definitions for objects defined in CLtL. In addition, this model supports effectively writing portable code. I obviously agree about the fact that run-time checks for portable code are indeed useful. Compile-time checks may contribute even more significantly to the detection of non-portable code. > If we ever want to do double definitions in the future, then the DEC > proposal sets up the framework. The ExCL proposal doesn't prevent > double definitions but it doesn't set it up either. 
> If double definitions will be important in the future in order to implement extensions then we should be able to come up with a few examples where they would be important. I can't think of any. As I see it, either the extension is so trivial you can add it to the standard definition, or the extension is so large that you'd be better off renaming the extended function to avoid confusion. In our opinion, the Franz proposal indeed renders double definitions substantially more inconvenient to use when trying to switch from a pure Common LISP to an extended environment and vice versa. I discussed the reason for this in a previous message. Thanks for your attention and again sorry for the length of this message, Ugo Buy ARPAnet: BUY%BARTOK.DEC@DECWRL.ARPA ------  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 18 Jun 86 10:27:12 EDT Received: from GSWD-VMS.ARPA by SU-AI.ARPA with TCP; 18 Jun 86 07:15:43 PDT Received: from ccvaxa.GSD (ccvaxa.ARPA) by gswd-vms.ARPA (5.9/) id AA11649; Wed, 18 Jun 86 09:14:40 CDT Message-Id: <8606181414.AA11649@gswd-vms.ARPA> Date: Wed, 18 Jun 86 09:14:35 cdt From: preece%ccvaxa@gswd-vms.ARPA (Scott E. Preece) To: COMMON-LISP@su-ai.arpa Subject: Re: Out-of-range subsequences > > From: Daniels.pa@Xerox.COM > > In particular, is (subseq #(1 2 3) 0 5) an error? ---------- Well, I agree that it ought to be clarified, so that we all do the same thing. I'm less sure that I agree it should signal an error, since the user could reasonably want to take a subsequence beginning at a certain point and running up to a certain length or the end of the string, whichever comes first. If subseq is allowed to stop at end-of-string you can do that (although the endpoint dangling out in space is an unpretty way of specifying what you mean as a length). If you clarify it to require an in-bounds endpoint, you need to do two operations. No big deal, but I don't think many users would find it confusing that the operation stops at end of string if the endpoint is out of bounds. 
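The two-operation, clamped form discussed in this thread might be sketched as follows (BOUNDED-SUBSEQ is a hypothetical helper name for illustration, not anything defined in CLtL):

```lisp
;; Sketch only: a "clamping" variant of SUBSEQ of the kind discussed
;; above.  BOUNDED-SUBSEQ is a made-up name; CLtL does not define it.
(defun bounded-subseq (sequence start &optional end)
  "Like SUBSEQ, but clamps START and END to the length of SEQUENCE."
  (let ((len (length sequence)))
    (subseq sequence
            (min start len)
            (if end (min end len) len))))

;; (bounded-subseq #(1 2 3) 0 5) => #(1 2 3), whereas
;; (subseq #(1 2 3) 0 5) may signal an error in some implementations.
```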
I'd prefer the clarification to say that the subsequence extends to the end of the sequence or to the specified endpoint, whichever is less, but it's not a big deal if the universal preference is for pickiness over (marginal) convenience. -- scott preece gould/csd - urbana ihnp4!uiucdcs!ccvaxa!preece  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 18 Jun 86 08:29:44 EDT Received: from [192.10.41.41] by SU-AI.ARPA with TCP; 18 Jun 86 05:23:47 PDT Received: from CHICOPEE.SCRC.Symbolics.COM by ELEPHANT-BUTTE.SCRC.Symbolics.COM via CHAOS with CHAOS-MAIL id 24177; Wed 18-Jun-86 08:22:54 EDT Date: Wed, 18 Jun 86 08:24 EDT From: Daniel L. Weinreb Subject: Out-of-range subsequences To: Daniels.pa@Xerox.COM, common-lisp@SU-AI.ARPA In-Reply-To: <860617-163623-1048@Xerox> Message-ID: <860618082448.8.DLW@CHICOPEE.SCRC.Symbolics.COM> Date: 17 Jun 86 14:43 PDT From: Daniels.pa@Xerox.COM In particular, is (subseq #(1 2 3) 0 5) an error? What an excellent example of the sort of thing that the manual really needs to be more explicit about. Our implementation signals an error. It certainly seems to me that it ought to.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 17 Jun 86 23:49:55 EDT Received: from [128.32.130.7] by SU-AI.ARPA with TCP; 17 Jun 86 20:33:52 PDT Received: by kim.Berkeley.EDU (5.51/1.14) id AA22650; Tue, 17 Jun 86 20:34:13 PDT Received: from fimass by franz (5.5/3.14) id AA02069; Tue, 17 Jun 86 20:23:11 PDT Received: by fimass (5.5/3.14) id AA16275; Tue, 17 Jun 86 19:21:09 PST From: franz!fimass!jkf@kim.Berkeley.EDU (John Foderaro) Return-Path: Message-Id: <8606180321.AA16275@fimass> To: Alan Snyder Cc: common-lisp@su-ai.arpa Subject: Re: packages and portability In-Reply-To: Your message of Tue, 17 Jun 86 09:58:20 GMT. <8606171658.AA00290@hplsny> Date: Tue, 17 Jun 86 19:21:03 PST >> Hmmm. I always thought that one of the virtues of having a package system is >> that you don't have to resort to sticking prefixes on your function names. 
>> Defining distinct symbols LISP:COMPILE-FILE and EXCL:COMPILE-FILE seems >> cleaner to me, if the intent is that EXCL:COMPILE-FILE is a >> localized version >> of LISP:COMPILE-FILE. I think we should support this sort of thing. >> ------- In this particular case, I think that it should be legal for me to add the :cross-reference argument to the compile-file function in the lisp package. Do you agree? Thus I would never have to create the function excl-compile-file. However if I decide that I'd like compile-file to have the form (compile-file filename &rest functions) and to compile only those functions listed in the named file, then I've made a significant change to compile-file and I should give it a new name (like excl:excl-compile-file). Using excl:compile-file would just be confusing. This example is pretty trivial; let's step back and look at the big picture. The DEC proposal and our proposal are trying to solve the same problem: creation of a lisp system in which it is possible to write portable programs as well as implementation specific (non-portable) programs. The DEC proposal does this by creating a package of functions which follow the spec set out in CLtL, and do nothing more. Functions which have extended behavior have the same name and are found in another package. Nothing is said about how different the extended functions can be from the original ones. The ExCL proposal is to have a single function to implement each of the CLtL functions and their extensions, and the only extensions that are permitted are those that are upward compatible (and typically involve adding a keyword argument). Furthermore the number of such extensions will be small and well documented. Under both proposals it is easy to write portable code. To verify that your code is portable you really have to check your source to see if you've used any extensions. 
Under the DEC proposal the system will catch non-portabilities in the CLtL functions as the code is running, but of course it won't catch them all unless you exercise every path of the code. In the ExCL proposal you have to take the handy 'extensions sheet' and check by hand that you haven't used any extensions. Of course, the careful programmer will simply not use extensions in portable code. Under the ExCL proposal it is simple and natural to switch between the extended and portable modes. Under the DEC proposal the procedure is more difficult. The ExCL proposal shows off the power of the package system in the ability to easily move into and out of the extended environment. The DEC proposal shows off one of the big misfeatures of the package system: that you have to go to so much trouble if you merely want to shadow one symbol in the lisp package. Do we want users to emulate this kind of package construction? If we ever want to do double definitions in the future, then the DEC proposal sets up the framework. The ExCL proposal doesn't prevent double definitions but it doesn't set it up either. If double definitions will be important in the future in order to implement extensions then we should be able to come up with a few examples where they would be important. I can't think of any. As I see it, either the extension is so trivial you can add it to the standard definition, or the extension is so large that you'd be better off renaming the extended function to avoid confusion. Sorry for being so longwinded, I think that this is one of the more important issues to be resolved. -john foderaro franz inc.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 17 Jun 86 22:26:23 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 17 Jun 86 19:15:29 PDT Received: ID ; Tue 17 Jun 86 22:15:12-EDT Date: Tue, 17 Jun 1986 22:15 EDT Message-ID: Sender: FAHLMAN@C.CS.CMU.EDU From: "Scott E. 
Fahlman" To: Daniels.pa@XEROX.COM Cc: common-lisp@SU-AI.ARPA Subject: Out-of-range subsequences In-reply-to: Msg of 17 Jun 1986 17:43-EDT from Daniels.pa at Xerox.COM Is it an error for a subsequence description to index elements that are "off the end" of a sequence? By analogy with ELT, it should be, but I have seen a couple of implementations that quietly take the intersection of the intervals and go on from there. This would be true for SUBSEQ as well as any sequence function that allows :START and :END keywords. In particular, is (subseq #(1 2 3) 0 5) an error? I don't see any clear statement in the book about this, but in my opinion this should "be an error" and should probably be required to signal an error. -- Scott  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 17 Jun 86 20:47:32 EDT Received: from XEROX.COM by SU-AI.ARPA with TCP; 17 Jun 86 17:36:17 PDT Received: from Cabernet.ms by ArpaGateway.ms ; 17 JUN 86 16:36:23 PDT Date: 17 Jun 86 14:43 PDT From: Daniels.pa@Xerox.COM Subject: Out-of-range subsequences To: common-lisp@su-ai.arpa cc: Daniels.pa@Xerox.COM Message-ID: <860617-163623-1048@Xerox> Is it an error for a subsequence description to index elements that are "off the end" of a sequence? By analogy with ELT, it should be, but I have seen a couple of implementations that quietly take the intersection of the intervals and go on from there. This would be true for SUBSEQ as well as any sequence function that allows :START and :END keywords. In particular, is (subseq #(1 2 3) 0 5) an error? -- Andy. --  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 17 Jun 86 20:19:51 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 17 Jun 86 17:08:54 PDT Received: ID ; Tue 17 Jun 86 20:07:48-EDT Date: Tue, 17 Jun 1986 20:07 EDT Message-ID: Sender: FAHLMAN@C.CS.CMU.EDU From: "Scott E. 
Fahlman" To: David Singer Cc: common-lisp@SU-AI.ARPA Subject: The Reader and lower-case characters In-reply-to: Msg of 17 Jun 1986 16:02-EDT from David Singer There is no way specified in Common Lisp to tell the reader not to transform symbols to upper case, though some implementations provide this facility. For an individual symbol, you could put vertical bars around it or call INTERN yourself with the mixed-case string. -- Scott  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 17 Jun 86 16:42:46 EDT Received: from SRI-KL.ARPA by SU-AI.ARPA with TCP; 17 Jun 86 13:36:36 PDT Date: Tue 17 Jun 86 13:02:43-PDT From: David Singer Subject: The Reader and lower-case characters To: common-lisp%SU-AI@SRI-KL cc: DSinger@SRI-KL Forgive me if I have missed something, or if this has been aired before ... How does one persuade the reader not to uppercase lowercase characters in symbols? The reader description, page 337 steps 7 and 8, does not make the uppercasing optional, and there doesn't seem to be an easy way to change that. (I'm translating a lisp-like language which differentiates case). Do I really have to build my own reader, or have I missed an easy way out? Thanks Dave Singer -------  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 17 Jun 86 13:10:05 EDT Received: from HPLABS.HP.COM by SU-AI.ARPA with TCP; 17 Jun 86 10:00:16 PDT Received: from hplsny by hplabs.HP.COM ; Tue, 17 Jun 86 09:59:15 pdt Received: by hplsny ; Tue, 17 Jun 86 09:58:58 pdt From: Alan Snyder Message-Id: <8606171658.AA00290@hplsny> Date: Tuesday, June 17, 1986 09:58:20 Subject: Re: packages and portability To: franz!fimass!jkf@kim.Berkeley.EDU Cc: common-lisp@su-ai In-Reply-To: Your message of 16-Jun-86 19:27:49 X-Sent-By-Nmail-Version: 04-Nov-84 17:14:46 We agree that double definitions aren't needed now and would be tedious to implement. I personally feel that they are also too dangerous. 
If it was determined that adding something like the :cross-reference argument to compile-file was an illegal extension, then I would rather create a new function (excl:excl-compile-file) than create a double definition for compile-file. Hmmm. I always thought that one of the virtues of having a package system is that you don't have to resort to sticking prefixes on your function names. Defining distinct symbols LISP:COMPILE-FILE and EXCL:COMPILE-FILE seems cleaner to me, if the intent is that EXCL:COMPILE-FILE is a localized version of LISP:COMPILE-FILE. I think we should support this sort of thing. -------  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 17 Jun 86 12:03:52 EDT Received: from USC-ISI.ARPA by SU-AI.ARPA with TCP; 17 Jun 86 08:53:24 PDT Date: 17 Jun 1986 11:31-EDT Sender: VERACSD@USC-ISI.ARPA Subject: Re: packages and portability From: VERACSD@USC-ISI.ARPA To: buy%bach.decnet@HUDSON.DEC.COM Cc: common-lisp@SU-AI.ARPA Message-ID: <[USC-ISI.ARPA]17-Jun-86 11:31:19.VERACSD> In-Reply-To: The message of 16 Jun 86 19:58:00 EST from "BACH::BUY" What is a "white page object" ? Thanks in advance. -- Cris Kobryn  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 17 Jun 86 11:16:51 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 17 Jun 86 08:02:52 PDT Received: ID ; Tue 17 Jun 86 10:59:55-EDT Date: Tue, 17 Jun 1986 10:59 EDT Message-ID: Sender: FAHLMAN@C.CS.CMU.EDU From: "Scott E. Fahlman" To: "Brandon P. Cross" Cc: common-lisp@SU-AI.ARPA Subject: Common LISP meeting at AAAI In-reply-to: Msg of 17 Jun 1986 08:14:49 CT from Brandon P. Cross We're trying (rather hurriedly and belatedly) to get our act together on activities at the Lisp Conference. It turns out that there's little free time during the conference and a lot of people have rather tight travel plans before and after. As of now, it looks like there will not be a mass meeting on Common Lisp of the sort we had last January. 
There probably will be a meeting on object-oriented facilities for Common Lisp on the afternoon of Wednesday, Aug 6, after the conference is officially over. Dick Gabriel is working on setting this up. The steering and technical committees will be meeting during the conference and we expect to have some joint meetings with the people working on standardization in Europe and Japan. I believe that the first official meeting of the X3J13 subcommittee will be in Washington on Sept 23 and 24. There will be a handout at the Lisp conference describing how people can participate in this activity. (People plugged into this mailing list will see this information electronically.) -- Scott  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 17 Jun 86 09:26:39 EDT Received: from IBM.COM by SU-AI.ARPA with TCP; 17 Jun 86 06:18:44 PDT Date: 17 June 1986, 08:14:49 CT From: "Brandon P. Cross" To: common-lisp@SU-AI.ARPA Message-Id: <061786.081454.brandon@ibm.com> Subject: Common LISP meeting at AAAI Has anyone thought about when any Common LISP meetings will take place during AAAI? Brandon Cross  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 16 Jun 86 23:45:48 EDT Received: from KIM.Berkeley.EDU by SU-AI.ARPA with TCP; 16 Jun 86 20:34:07 PDT Received: by kim.Berkeley.EDU (5.51/1.14) id AA03623; Mon, 16 Jun 86 20:34:29 PDT Received: from fimass by franz (5.5/3.14) id AA07481; Mon, 16 Jun 86 20:29:56 PDT Received: by fimass (5.5/3.14) id AA14485; Mon, 16 Jun 86 19:27:55 PST From: franz!fimass!jkf@kim.Berkeley.EDU (John Foderaro) Return-Path: Message-Id: <8606170327.AA14485@fimass> To: Cc: common-lisp@su-ai.arpa Subject: Re: packages and portability In-Reply-To: Your message of 16 Jun 86 19:58:00 EST. 
<8606170042.AA01941@kim.Berkeley.EDU> Date: Mon, 16 Jun 86 19:27:49 PST >> I agree about the fact that the uniformity of your package model is >> indeed appealing, however this model may prove inconvenient to use in >> the case of symbols that have two distinct definitions, one each in >> the pure Common LISP and in the extended package. In fact, I think >> that a desirable feature of the future package configuration is not >> to preclude the possibility for a symbol to be defined in both the >> pure Common LISP and the extended package. I don't like to preclude anything either since I can't predict the future nor do I know the requirements of other implementations, but I know that right now what I've described is the better setup for our implementation. We don't have (and never plan to have) any external symbols in common between the lisp package and the excl package (nor any of the other extension packages), thus users can use and unuse packages without any problems. >> In addition, it is not immediately obvious to me that the package >> name EXCL is going to be exclusively used by Franz: it sounds like an >> acronym for extended Common LISP and may therefore be used by >> multiple implementations. By the same token, 'vax-lisp' is a pretty generic package name, and I might want to use it to store my vax-only lisp functions. I think that with a little cooperation we can manage to avoid stepping on each other's package names. >> I agree that extensions to white page objects should be upward >> compatible, however I don't think it is possible to establish a fixed >> upper bound on such extensions and on the amount of documentation >> for each of the extensions (e.g. the VAX LISP User Guide >> documentation for COMPILE-FILE is three pages long). I didn't mean to limit the length of the documentation of the extended functions, just that it is our goal that the actual list of changes will fit on one page. 
An example of a change is compile-file: takes an extra keyword argument :cross-reference. The documentation that describes compile-file is elsewhere. The user interested in portability can search his code to see if the :cross-reference keyword is passed to compile-file. >> In summary, I >> think that the most reliable way to deal with white page extended >> objects is to provide two distinct definitions, one for each of the >> two packages. Although double definitions may not appear to be >> necessary for the time being and may be tedious to implement, I think >> that they should not be precluded a priori from future >> implementations. We agree that double definitions aren't needed now and would be tedious to implement. I personally feel that they are also too dangerous. If it was determined that adding something like the :cross-reference argument to compile-file was an illegal extension, then I would rather create a new function (excl:excl-compile-file) than create a double definition for compile-file. What extensions do you see in the future that would make double definitions necessary? -john foderaro franz inc.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 16 Jun 86 23:08:40 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 16 Jun 86 19:58:03 PDT Received: ID ; Mon 16 Jun 86 22:52:47-EDT Date: Mon, 16 Jun 1986 22:52 EDT Message-ID: Sender: FAHLMAN@C.CS.CMU.EDU From: "Scott E. Fahlman" To: David Bein Cc: common-lisp@SU-AI.ARPA Subject: compiled-function-p In-reply-to: Msg of 16 Jun 1986 16:46-EDT from David Bein Is compiled-function-p supposed to answer true for FEXPRs? What should compiled-function-p do when it comes across some evil looking thing which represents a compiled-closure and which is truly funcall'able like a regular compiled lambda body? There's no such thing as a FEXPR in Common Lisp, though some implementations may use such a thing internally in implementing special forms. 
I'll assume that you mean "special form" or "special operator" or whatever we are calling it. I think that Compiled-Function-P and Function-P should answer NIL for special forms. Compiled-Function-P should answer T for compiled closures that are callable like functions. This whole business of what is a function needs work. It will be on the agenda of things to be discussed and clarified. -- Scott  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 16 Jun 86 20:39:36 EDT Received: from HUDSON.DEC.COM by SU-AI.ARPA with TCP; 16 Jun 86 17:27:40 PDT Date: 16 Jun 86 19:58:00 EST From: "BACH::BUY" Subject: packages and portability To: "common-lisp" Reply-To: "BACH::BUY" It seems that my previous message was mistakenly sent three times to the mailing list and I would like to apologize for any confusion that that may have caused. I will make sure only one copy of this message is mailed out. This is in reply to John Foderaro's message about the package configuration adopted by Franz for their implementation of Common LISP. A number of points were brought up evidencing differences between the DEC proposed configuration and the Franz configuration. > 2. The 'excl' package contains external symbols which are distinct from > those of the 'lisp' package, and these symbols represent most of > our extensions. We thus agree with DEC that each implementation > should define an extension package with a unique name. We don't > think that the lisp package symbols should be imported into this > package (more on this below). >... > > 4. The 'user' package uses both the 'lisp' and 'excl' packages in the > version of common lisp we ship. It is a simple matter to unuse > the 'excl' package in your init file (or globally for all users) > if you want a pure common lisp. [as long as you don't use the > lisp package function extensions documented in the book] > ... > > In point (2) I mentioned that the excl package doesn't contain the > lisp package symbols. 
This permits you to go into extended mode with > a use-package and out of extended mode with unuse-package (in the DEC > mode, the package switch is more complex). But more important, I > think that our way is 'extensible' to the future when extensions come > in their own package. In fact right now we have our standard > extensions in the 'excl' package, but we also have a > 'foreign-functions' package, two graphics packages, a flavors > package, a 'cross-reference' package, etc. In our model, you can > simply use the packages you want to work in. The excl package > extensions aren't treated any differently than the other extensions > and I don't see why they should be. I agree about the fact that the uniformity of your package model is indeed appealing, however this model may prove inconvenient to use in the case of symbols that have two distinct definitions, one each in the pure Common LISP and in the extended package. In fact, I think that a desirable feature of the future package configuration is not to preclude the possibility for a symbol to be defined in both the pure Common LISP and the extended package. According to your model, in the case of symbols defined in both packages, the creation of an extended environment (similar to the USER package) always results in name conflicts, unless shadowing is used, which may be tedious to do. Furthermore, the resolution of a name conflict between two external symbols inherited by the using package may cause one of the two symbols to be imported (CLtL p. 180), which will render such a symbol insensitive to subsequent UNUSE-PACKAGE operations (CLtL p.177). So, should name conflicts caused by (USE-PACKAGE 'excl) be resolved in favor of the extended symbols, subsequent (UNUSE-PACKAGE 'excl) will not get rid of the extended definitions for such symbols. The model we proposed can deal uniformly and (relatively conveniently) with symbols defined in one or both of the two packages. 
To switch from an extended to a pure Common LISP environment: (UNUSE-PACKAGE 'VAX-LISP) (COMMON-LISP:USE-PACKAGE 'COMMON-LISP) Vice versa, to switch from a Common LISP to an extended environment: (UNUSE-PACKAGE 'COMMON-LISP) (COMMON-LISP:USE-PACKAGE 'VAX-LISP) In addition, it is not immediately obvious to me that the package name EXCL is going to be exclusively used by Franz: it sounds like an acronym for extended Common LISP and may therefore be used by multiple implementations. > 3. If extensions must be made to the functions in the book, (i.e. > symbols in the lisp package) they are made only where upward > compatibility is preserved, and it is clearly documented in the > manual that these are non-portable extensions. [What I'm talking > about are things like extra keywords to 'compile-file'. The list > of such extensions will always be less than a page]. I agree that extensions to white page objects should be upward compatible, however I don't think it is possible to establish a fixed upper bound on such extensions and on the amount of documentation for each of the extensions (e.g. the VAX LISP User Guide documentation for COMPILE-FILE is three pages long). Also, site specific extensions to white page objects should be stated to be non-portable in the on-line documentation as well. In summary, I think that the most reliable way to deal with white page extended objects is to provide two distinct definitions, one for each of the two packages. Although double definitions may not appear to be necessary for the time being and may be tedious to implement, I think that they should not be precluded a priori from future implementations. 
Ugo Buy ARPAnet: BUY%BARTOK.DEC@DECWRL.ARPA ------  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 16 Jun 86 17:13:08 EDT Received: from SUN.COM by SU-AI.ARPA with TCP; 16 Jun 86 14:02:21 PDT Received: from sun.uucp by sun.com (3.2/SMI-3.0) id AA13232; Mon, 16 Jun 86 13:56:27 PDT Received: by sun.uucp (1.1/SMI-3.0) id AA18443; Mon, 16 Jun 86 14:02:20 PDT Received: by pyramid (4.12/3.14) id AA21964; Mon, 16 Jun 86 13:54:01 pdt Date: 16 Jun 1986 13:46-PDT From: David Bein Subject: compiled-function-p To: common-lisp@su-ai.ARPA Message-Id: <519338806/bein@pyramid> Is compiled-function-p supposed to answer true for FEXPRs? What should compiled-function-p do when it comes across some evil looking thing which represents a compiled-closure and which is truly funcall'able like a regular compiled lambda body? --David p.s. Please excuse me if this question has been posted before as I am sure it must have been.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 16 Jun 86 14:51:34 EDT Received: from HUDSON.DEC.COM by SU-AI.ARPA with TCP; 16 Jun 86 11:35:14 PDT Date: 16 Jun 86 09:46:00 EST From: "BACH::BUY" Subject: packages and portability To: "common-lisp" cc: buy Reply-To: "BACH::BUY" This note relates to the discussion about packages and portability that recently took place in this digest. The major focus of the discussion has been the definition of a mutually agreed-upon package configuration to suitably support the portability of Common LISP programs across different implementations. This issue is currently under investigation by the VAX LISP development group here at DEC and we would like to present our viewpoint on the subject. Although many different suggestions have been made during this discussion, it appears that an agreement was reached on a number of points. Here are some of the points: 1. Symbols denoting Common LISP defined objects should be contained in a pure Common LISP package. 
These symbols are owned and external in this package and no other external symbols should appear in this package. So far, most participants in the discussion appear to have agreed that the name of this package should be LISP. 2. Symbols denoting implementation specific language extensions should be contained in a separate package. Such a package may or may not use the package containing pure Common LISP definitions. Based on these points, we would like to propose the following outline for the package configuration of future releases of VAX LISP: 1. Two packages are used: one to contain external symbols denoting pure Common LISP definitions, the other to contain external symbols denoting both pure Common LISP definitions and VAX LISP specific definitions. 2. The default environment (i.e. the USER package) uses the extended package (i.e. the package containing symbols both for pure Common LISP and for VAX LISP specific definitions). 3. The default :USE arguments to MAKE-PACKAGE and IN-PACKAGE are also the extended package. A first observation about the naming of the above mentioned packages is in order. Although some agreement was reached on LISP being the name of the pure Common LISP package, we believe that a more specific name, such as COMMON-LISP, is probably more appropriate. This is because the name COMMON-LISP better captures the desired semantics of this package, that is, to contain only the Common LISP subset of a LISP implementation. We obviously recognize the benefits of adopting a standard name for this package and are therefore open to discussion about this issue. As far as the extension package is concerned, the name VAX-LISP is our preferred choice for the same reason as the name COMMON-LISP for the previous package. Notice that this proposal contradicts the specifications in CLtL, in that no provision is made for a package named LISP (p. 181). 
However, we believe that the spec of the LISP package in CLtL is ambiguous and should be modified in any case, because it does not indicate whether LISP should contain just the Common LISP subset or the implementation specific extensions as well. Moreover, this scheme leaves each site using VAX LISP the freedom to use the name LISP as a nickname for either the pure Common LISP or the extended package. Although we do not advocate the use of the package name LISP in future applications because it is too generic, it is clear that existing applications may make use of it for reasons of compatibility with older versions. Aside from naming considerations, this proposal has a number of advantages in our view. First, it is relatively easy to create a pure Common LISP, portable environment by use of:

(MAKE-PACKAGE 'xxx :USE 'COMMON-LISP), or
(IN-PACKAGE 'xxx :USE 'COMMON-LISP)

Second, it is reasonably easy to remove extended definitions from the USER environment:

(UNUSE-PACKAGE 'VAX-LISP)
(COMMON-LISP:USE-PACKAGE 'COMMON-LISP)

Third, the default user environment (i.e. the USER package) is consistent with the default environment provided by MAKE-PACKAGE and IN-PACKAGE. Although this aspect may not be crucial in the future package structure, it is likely to enhance the ease of use and conceptual clarity of the package system, especially to new LISP users. Notice that in this case writers of portable code need to explicitly specify the :USE argument to MAKE-PACKAGE and IN-PACKAGE in order to get a pure Common LISP environment. This is consistent with the fact that portable code writers are likely to be required to have a more thorough knowledge of the package system than new users would. Special attention needs to be given to the problem of white page objects whose CLtL definition has been extended by an implementation. At the least, we plan to emphasize in our documentation those aspects of CLtL specified objects that have been extended by the VAX LISP implementation.
In addition, we are also considering providing double implementations for objects that have undergone significant extensions in our implementation, such as FORMAT and MAKE-ARRAY. A last remark is about a variable (or a SETF'able function) bound to the package to be used as the default to MAKE-PACKAGE and IN-PACKAGE. We believe that this variable would be meaningful only if all Common LISP implementations agree not only on its syntax and semantics, but also on its initial value. For the time being, in our opinion, this variable is not required in order for people to write portable Common LISP code, although we are again willing to consider other proposals as well. Thanks, Ugo Buy ARPAnet: BUY%BARTOK@HUDSON.DEC.COM ------  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 16 Jun 86 13:30:15 EDT Received: from MIT-MULTICS.ARPA by SU-AI.ARPA with TCP; 16 Jun 86 10:16:07 PDT Date: Mon, 16 Jun 86 13:15 EDT From: Hvatum@MIT-MULTICS.ARPA Subject: ' (lambda...) To: common-lisp@SU-AI.ARPA Message-ID: <860616171540.788413@MIT-MULTICS.ARPA> From: Steve Bacher (C.S.Draper Lab) Subject: Evaluating '#'(lambda ...) To: Miller.pa at XEROX.COM cc: common-lisp at SU-AI OK, let's say that you have an optimizing compiler that can take function calls containing only constants and reduce them at compile time. This is effectively done by performing the operation at compile time and inserting the result into the compiled code, n'est-ce pas? Now, first of all the compiler needs a list of function names for which it can perform this optimization (obviously CONS and other side-effecting functions are not candidates). Let us say that EVAL is in this list.
Now, the way the compiler would handle a form (EVAL 'FOO NIL) - assuming a slightly-uncommon-lisp EVAL that takes a second argument for which NIL represents the null lexical environment - is for the compiler to invoke EVAL on the constant FOO in the lexical environment specified by NIL, and plug that value into the code where the call to EVAL originally appeared. Now, let's take the case of (eval '#'(lambda (x y) ...) nil) The compiler will strip the quote mark off '#'(lambda (x y) ...) and end up with (FUNCTION (LAMBDA (X Y) ...)), true? Now, the compiler then passes this form (a LIST whose CAR is the atom FUNCTION) to EVAL and comes up with some object. *********************************************************************** The nature of this object is what I was hoping to get some answers to, since this seems unclear. It has been flamed at great length what #'FOO means where FOO is a symbol, but not what #'(LAMBDA (X Y) (FOO X Y)) means. *********************************************************************** Anyhow, as you may be able to see at this point, the implementation must ALREADY have the capability (as implemented within EVAL itself) of creating the correct kind of object to insert in the code at this point. If it does, then there must be some way of creating it (which the programmer ought to be able to get hands on). Thus, we are back to the original question. If it does not, then there is no solution to the problem in the given implementation, and the creation of null-lexical-environment closures is not possible. If EVAL of #'(lambda ...) does not return the same kind of object as compiling #'(lambda ...), which I suspect it doesn't (or what would be the difference between compilation and interpretation?), then this would produce an unsatisfactory (to my mind) result - one might as well code '(lambda ...). But a further point is that it may not even be desirable to optimize calls to EVAL. 
After all, such an optimization would bypass a user setting of *EVALHOOK*, for example. - SEB (at CSDL)   Received: from SU-AI.ARPA by AI.AI.MIT.EDU 16 Jun 86 12:47:25 EDT Received: from KIM.Berkeley.EDU by SU-AI.ARPA with TCP; 16 Jun 86 09:37:10 PDT Received: by kim.Berkeley.EDU (5.51/1.14) id AA10921; Mon, 16 Jun 86 09:37:30 PDT Received: from fimass by franz (5.5/3.14) id AA04774; Mon, 16 Jun 86 09:29:32 PDT Received: by fimass (5.5/3.14) id AA12569; Mon, 16 Jun 86 08:27:29 PST From: franz!fimass!jkf@kim.Berkeley.EDU (John Foderaro) Return-Path: Message-Id: <8606161627.AA12569@fimass> To: ucbkim!decwrl.DEC.COM!buy%bach.DEC Cc: common-lisp@su-ai.arpa Subject: Re: packages and portability In-Reply-To: Your message of Mon, 16 Jun 86 06:58:22 PDT. <8606161358.AA08975@decwrl.DEC.COM> Date: Mon, 16 Jun 86 08:27:24 PST Our package setup is a bit different than the DEC proposal described by Ugo Buy. 1. The 'lisp' package contains only those external symbols defined by the book. [DEC would call this the common-lisp package. I could see this name change being useful for people with existing lisps who want to embed a common lisp into their lisp. If these people come forth and lobby for it, I suppose I'll agree, otherwise I think it should continue to be named lisp.] 2. The 'excl' package contains external symbols which are distinct from those of the 'lisp' package, and these symbols represent most of our extensions. We thus agree with DEC that each implementation should define an extension package with a unique name. We don't think that the lisp package symbols should be imported into this package (more on this below). 3. If extensions must be made to the functions in the book, (i.e. symbols in the lisp package) they are made only where upward compatibility is preserved, and it is clearly documented in the manual that these are non-portable extensions. [What I'm talking about are things like extra keywords to 'compile-file'. 
The list of such extensions will always be less than a page]. 4. The 'user' package uses both the 'lisp' and 'excl' packages in the version of common lisp we ship. It is a simple matter to unuse the 'excl' package in your init file (or globally for all users) if you want a pure common lisp. [as long as you don't use the lisp package function extensions documented in the book] 5. The default use list for the make-package and in-package functions is still the 'lisp' package. This is important: it makes the user identify non-portable code. It prevents the user from running ExCL-only code in Vax-Lisp (as soon as the in-package is executed, Vax-Lisp will complain about a missing excl package). In point (2) I mentioned that the excl package doesn't contain the lisp package symbols. This permits you to go into extended mode with a use-package and out of extended mode with unuse-package (in the DEC mode, the package switch is more complex). But more important, I think that our way is 'extensible' to the future when extensions come in their own package. In fact right now we have our standard extensions in the 'excl' package, but we also have a 'foreign-functions' package, two graphics packages, a flavors package, a 'cross-reference' package, etc. In our model, you can simply use the packages you want to work in. The excl package extensions aren't treated any differently than the other extensions and I don't see why they should be. -john foderaro franz inc.
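[Editor's sketch: under the Franz model just described, switching environments is nothing more than ordinary package operations in an init file. The package names are the ones quoted in the message ('excl', 'foreign-functions').]

```lisp
;; Back to pure Common Lisp: stop inheriting the extensions
;; from the USER package.
(unuse-package 'excl)

;; Or opt into exactly the extension packages you want to work with.
(use-package '(excl foreign-functions))
```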
Received: from SU-AI.ARPA by AI.AI.MIT.EDU 12 Jun 86 14:41:12 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 12 Jun 86 11:30:29 PDT Received: ID ; Thu 12 Jun 86 14:30:27-EDT Date: Thu 12 Jun 86 14:30:25-EDT From: Bill.Scherlis@C.CS.CMU.EDU Subject: Lisp Conference, addendum To: common-lisp@SU-AI.ARPA Message-ID: <12214277536.31.SCHERLIS@C.CS.CMU.EDU> If there is anybody on this mailing list who is not an ACM member and who would like to receive registration information for the Lisp and Functional Programming Conference, please send a note to Marce Zaragoza here at CMU (Zaragoza@C.CS.CMU.EDU). Don't forget to include your physical mailing address. Bill Scherlis -------  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 11 Jun 86 22:09:22 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 11 Jun 86 18:55:29 PDT Received: ID ; Wed 11 Jun 86 21:55:29-EDT Date: Wed 11 Jun 86 21:55:26-EDT From: Bill.Scherlis@C.CS.CMU.EDU Subject: 1986 ACM Lisp and Fnl Pgg Conference To: common-lisp@SU-AI.ARPA Message-ID: <12214096407.8.SCHERLIS@C.CS.CMU.EDU> Advance Program The 1986 ACM Conference on LISP AND FUNCTIONAL PROGRAMMING MIT, August 4-6, 1986. Monday, August 4, 1986 Session 1. 9:00am -- 10:30am. Session chair: William L. Scherlis (CMU) Laws in Miranda. Simon Thompson (University of Kent at Canterbury) A Simple Applicative Language: mini-ML. D. Clement (SEMA, Sophia-Antipolis), J. Despeyroux, T. Despeyroux, G. Kahn (INRIA, Sophia-Antipolis) Integrating Functional and Imperative Programming. David K. Gifford, John M. Lucassen (Massachusetts Institute of Technology) Session 2. 10:55am -- 12:35pm. Session chair: Daniel Weinreb (Symbolics, Inc.) Experience with an Uncommon LISP. Cyril N. Alberga, Chris Bosman-Clark, Martin Mikelsons, Mary S. Van Deusen (IBM T. J. Watson Research Center), Julian Padget (University of Bath) Desiderata for the Standardization of LISP. Julian Padget (University of Bath), et al.
Design of an Optimizing, Dynamically Retargetable Compiler for Common Lisp. Rodney A. Brooks (MIT, Lucid, Inc.), David B. Posner, James L. McDonald, Jon L. White, Eric Benson, Richard P. Gabriel (Lucid, Inc.) The Implementation of PC Scheme. David H. Bartley, John C. Jensen (Texas Instruments Incorporated) Session 3. 2:00pm -- 3:40pm. Session chair: John H. Williams (IBM Research) Code Generation Techniques for Functional Languages. J. Fairbairn, S. C. Wray (University of Cambridge) An Architecture for Mostly Functional Languages. Tom Knight (Symbolics, Inc. and Massachusetts Institute of Technology) Mechanisms for Efficient Multiprocessor Combinator Reduction. M. Castan, M. -H. Durand, G. Durrieu, B. Lecussan, M. Lemaitre (ONERA-CERT) The CURRY Chip. John D. Ramsdell (The MITRE Corporation) Session 4. 4:05pm -- 5:45pm. Session chair: Mitchell Wand (Northeastern University) Variations on Strictness Analysis. Adrienne Bloss, Paul Hudak (Yale University) Expansion-Passing Style: Beyond Conventional Macros. R. Kent Dybvig, Daniel P. Friedman, Christopher T. Haynes (Indiana University) Hygienic Macro Expansion. Eugene Kohlbecker, Daniel P. Friedman, Matthias Felleisen, Bruce Duba (Indiana University) Exact Real Arithmetic: A Case Study in Higher Order Programming. Hans-J. Boehm, Robert Cartwright, Mark Riggle (Rice University), Michael J. O'Donnell (University of Chicago) Banquet. INVITED TALK: The History of Lisp. John McCarthy (Stanford University) Tuesday, August 5, 1986. Session 5. 9:00am -- 10:40am. Session chair: Rodney Brooks (MIT) Reconfigurable, Retargetable Bignums: A Case Study in Efficient, Portable Lisp System Building. Jon L. White (Lucid, Inc.) LISP on a Reduced-Instruction-Set-Processor. Peter Steenkiste, John Hennessy (Stanford University) Partitioning Parallel Programs for Macro-dataflow. Vivek Sarkar, John Hennessy (Stanford University) NORMA: A Graph Reduction Processor. Mark Scheevel (Burroughs Corporation) Session 6. 11:05am -- 12:20pm.
Session chair: Mark Wegman (IBM Research) The Four-Stroke Reduction Engine. Chris Clack, Simon L. Peyton Jones (University College London) On the Use of LISP in Implementing Denotational Semantics. Peter Lee, Uwe Pleban (The University of Michigan) Semantics Directed Compiling for Functional Languages. Hanne R. Nielson, Flemming Nielson (Aalborg University Center) Session 7. 2:00pm -- 3:15pm. Session chair: Gilles Kahn (INRIA) Connection Graphs. Alan Bawden (Massachusetts Institute of Technology) Implementing Functional Languages in the Categorical Abstract Machine. Michel Mauny, Ascander Suarez (INRIA) Connection Machine LISP: Fine-Grained Parallel Symbolic Processing. Guy L. Steele, Jr., W. Daniel Hillis (Thinking Machines Corporation) Session 8. 3:45pm -- 5:45pm. Panel chair: L. Peter Deutsch (Xerox PARC and CodeSmith Technology, Inc.) PANEL: Object Oriented Programming in Lisp. Wednesday, August 6, 1986. Session 9. 9:00am -- 10:40am. Session chair: David MacQueen (Bell Laboratories) The Mystery of the Tower Revealed: A Non-Reflective Description of the Reflective Tower. Mitchell Wand (Northeastern University), Daniel P. Friedman (Indiana University) A Type-Inference Approach to Reduction Properties and Semantics of Polymorphic Expressions. John C. Mitchell (AT&T Bell Laboratories) Equations, Sets, and Reduction Semantics for Functional and Logic Programming. Bharat Jayaraman (University of North Carolina at Chapel Hill) Towards a Semantic Theory for Equational Programming Languages. Satish Thatte (The University of Michigan) Session 10. 11:05am -- 12:20pm. Session chair: Richard P. Gabriel (Lucid, Inc) A Protocol for Distributed Reference Counting. Claus-Werner Lermen, Dieter Maurer (Universitat des Saarlandes) A Semantic Model of Reference Counting and its Abstraction. Paul Hudak (Yale University) Distributed Copying Garbage Collection. Martin Rudalics (Institut fur Mathematik) End of conference. 
================================================================ CONFERENCE CHAIR: Richard P. Gabriel (Lucid, Inc) PROGRAM CHAIRS: William L. Scherlis (CMU), John H. Williams (IBM) LOCAL ARRANGEMENTS CHAIR: Robert Halstead (MIT) PROGRAM COMMITTEE: Rodney Brooks, MIT L. Peter Deutsch, Xerox PARC and CodeSmith Technology, Inc. Gilles Kahn, INRIA David MacQueen, Bell Laboratories J. Alan Robinson, Syracuse University William L. Scherlis, CMU David Turner, Kent University Mitchell Wand, Northeastern University Mark Wegman, IBM Research Daniel Weinreb, Symbolics John H. Williams, IBM Research ================================================================ Conference brochures are now being mailed by ACM. The early registration deadline is July 7. -------  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 11 Jun 86 12:21:17 EDT Received: from UTAH-CS.ARPA by SU-AI.ARPA with TCP; 11 Jun 86 09:09:22 PDT Received: by utah-cs.ARPA (5.31/4.40.2) id AA00536; Wed, 11 Jun 86 10:10:46 MDT Received: by utah-orion.ARPA (5.31/4.40.2) id AA03641; Wed, 11 Jun 86 10:10:40 MDT Date: Wed, 11 Jun 86 10:10:40 MDT From: shebs%utah-orion@utah-cs.arpa (Stanley Shebs) Message-Id: <8606111610.AA03641@utah-orion.ARPA> Newsgroups: fa.common-lisp Subject: Re: Some questions Summary: Expires: References: <860610-162408-2271@Xerox> Sender: Reply-To: shebs@utah-orion.UUCP (Stanley Shebs) Followup-To: Distribution: Organization: University of Utah CS Dept Keywords: Apparently-To: common-lisp@su-ai.arpa In article <860610-162408-2271@Xerox> Miller.pa@Xerox.COM writes: > > Is there a need for flexibility in the mapping from print-names to >symbols other than to simulate (poorly) flexibility in mapping symbols >to values? You are correct that packages buy you scoping of symbols >themeselves. My question is: what's this used for other than scoping >values? I personally don't see the purpose of packages as being to provide "flexibility" in mapping symbols to values or to simulate such flexibility. 
If you ignore the kinkier package operations (which nobody agrees about the right behavior of anyway), then any program is equivalent to one in which all the package references are explicit references to internal symbols - as in (lisp-setq user-x (lisp-cdr sys-*luser-features*)) Packages are a typing/reading aid, to avoid having to qualify every symbol in the world, and to avoid the bad old days when qualification of symbols was haphazard at best. One should note that there is *no way* to create an interned symbol that is not accessible from anywhere. Locales and lexical environments and objects are for encapsulation, and therefore support and even encourage creation of things that are only locally visible. The "ultimate" Lisp system has a place both for locales and packages, although it might be a bit complicated... stan  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 11 Jun 86 12:07:49 EDT Received: from YALE-BULLDOG.ARPA by SU-AI.ARPA with TCP; 11 Jun 86 08:57:59 PDT Received: by Yale-Bulldog.YALE.ARPA; 11 Jun 86 11:34:52 EDT (Wed) Date: 11 Jun 86 11:34:52 EDT (Wed) From: James F Philbin Message-Id: <8606111534.AA03577@Yale-Bulldog.YALE.ARPA> Subject: Locales To: Cc: Common-lisp@SU-AI.ARPA BTW. The original T did have a single global property space, but I believe that the new T (T3) has fixed that. Perhaps one of the T folk would care to comment (I'm on shaky ground here) In T3 symbols do not have associated property lists and thus there is no property space. Symbols are used only as identifiers and are pure, i.e. they cannot be side-effected. In place of properties T provides Tables which are associations between keys (possibly symbols) and values. In my experience it is trivial to convert code which uses property lists to code using tables. It is also trivial to implement GET and PUT using tables.  
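[Editor's sketch: Philbin's last point, that GET and PUT are trivial on top of tables, rendered in Common Lisp using hash tables. One table per property name maps symbols to values; the names TABLE-GET and TABLE-PUT are illustrative, not T's actual interface.]

```lisp
;; Property "tables" in place of symbol plists: an EQ hash table per
;; property name, keyed by symbol.
(defvar *property-tables* (make-hash-table :test #'eq))

(defun property-table (property)
  ;; Find (or lazily create) the table for this property name.
  (or (gethash property *property-tables*)
      (setf (gethash property *property-tables*)
            (make-hash-table :test #'eq))))

(defun table-get (symbol property)
  ;; Analogue of GET: NIL if no value has been stored.
  (gethash symbol (property-table property)))

(defun table-put (symbol property value)
  ;; Analogue of PUT / (SETF GET).
  (setf (gethash symbol (property-table property)) value))
```

After (TABLE-PUT 'FRED 'COLOR 'RED), (TABLE-GET 'FRED 'COLOR) yields RED without ever touching FRED's property list, and the association is visible only to code holding the tables.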
Received: from SU-AI.ARPA by AI.AI.MIT.EDU 11 Jun 86 03:10:10 EDT Received: from SU-SHASTA.ARPA by SU-AI.ARPA with TCP; 11 Jun 86 00:01:54 PDT Received: by su-shasta.arpa; Wed, 11 Jun 86 00:00:19 PDT Received: by nttlab.ntt.junet (4.12/4.7JC-7) with TCP id AA25440; Wed, 11 Jun 86 14:27:18 jst Date: Wed, 11 Jun 86 14:27:18 jst From: nttlab!umemura@su-shasta.arpa (Kyoji UMEMURA) Message-Id: <8606110527.AA25440@nttlab.ntt.junet> To: Shasta!Common-lisp@su-ai.arpa, Shasta!Fahlman%C.CS.CMU.EDU@su-score.arpa Subject: Kanji, Foreign characters

>>Do you really want the name of some function to be
>>"FOO" in the Bodini font, so you have in that font or you can't access
>>that function? True there are some plain vanilla characters you hardly
>>ever want in the PNAME of an identifier/symbol, but in general you should
>>be able to do INTERN or STRING2ID or whatever on an arbitrary string of
>>whatever you consider legal characters. Any information that isn't
>>reasonable for coercing into a PNAME shouldn't be considered a normal part
>>of strings either. Use extended strings (fat strings or whatever) for
>>all that other cruft.

>The Japanese person who wants to include kanji etc. in PNAMEs of symbols
>(names of functions and variables mostly), I agree. Get rid of fonts
>and bit attributes in PNAMEs, but allow an implementation to have fat
>characters in vanilla strings and PNAMEs. -- One problem, how do you
>exchange programs between ASCII-only implementations and
>KANJI implementations?
>I don't know the answer.

It is necessary to be able to convert a fat string to a unique SIMPLE-STRING, and to convert that simple string back to the original fat string. This is similar to the case of CHAR-INT and INT-CHAR. The character code may be implementation dependent (for example EBCDIC or ASCII). However, the existence of a "one to one mapping" between the two data types has a very important meaning for the programmer. The converting method between a fat string and a simple string might be implementation dependent.
However, CLtL should define the function names for the conversion. If these functions are defined, the converted simple string can be used in a "foreigner's" PNAME. The "foreigner's" documentation string can be stored in the converted simple one without any extra space. Apart from foreign characters, documentation with fonts can be realized in the same way. It will be useful for all CL users. These functions might be the identity function in some Japanese implementations, where STANDARD-CHAR includes all KANJI characters. Another implementation might use an escape character for converting a fat string. Another implementation might use special codes to save space. Whatever the conversion might be, portable code can be developed if the "one to one mapping" is defined clearly. --- kyoji Umemura  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 10 Jun 86 19:44:40 EDT Received: from XEROX.COM by SU-AI.ARPA with TCP; 10 Jun 86 16:29:15 PDT Received: from Cabernet.ms by ArpaGateway.ms ; 10 JUN 86 16:24:08 PDT Date: 10 Jun 86 16:23 PDT From: Miller.pa@Xerox.COM Subject: Re: Some questions In-reply-to: Daniel L. Weinreb 's message of Tue, 10 Jun 86 11:45 EDT To: DLW@QUABBIN.SCRC.Symbolics.COM cc: miller.pa@Xerox.COM, Common-lisp@SU-AI.ARPA Message-ID: <860610-162408-2271@Xerox> Date: Tue, 10 Jun 86 11:45 EDT From: Daniel L. Weinreb Date: 29 May 86 21:21 PDT From: miller.pa@Xerox.COM How happy are you with packages? In particular, for those familiar with T's reified lexical environments (aka LOCALEs), can you think of any reason for preferring packages? Can T's LOCALEs be added to the language in an upwards compatible way? I'd like to point out, again, that packages get you things that locales don't (if I understand locales properly -- someone correct me if I'm wrong). Packages provide name scoping for symbols themselves, not for values. Therefore, if symbols are being used by virtue of their identity rather than their value, packages provide name scoping and locales do not.
For example, suppose one subsystem uses the A:FOO property of symbols, and another subsystem uses the B:FOO property. I actually think your example emphasizes my point. Traditional (old) lisps had a very simple mapping from print-names to symbols (one to one), and awkward control over the mapping from symbols to values (dynamic scoping). As we started to compose ever larger programs, it became obvious that we needed better control and flexibility somewhere in here. To achieve more flexibility mapping print-names to symbols, we invented packages. To achieve flexibility and control mapping symbols to values, we invented lexical closures, reified lexical environments (locales), separate name spaces (function / "value" / type / property / etc..), and object oriented programming. Your example shows a need for scoping in property value space. You are simulating this in a global property value space by using the "name scoping" of packages. The reason I find this unsatisfying is that this scoping happens at read time, and the reader is not part of the computational model of the language. In the computational model of the language you are still stuck with a global property space. Is there a need for flexibility in the mapping from print-names to symbols other than to simulate (poorly) flexibility in mapping symbols to values? You are correct that packages buy you scoping of symbols themselves. My question is: what's this used for other than scoping values? BTW, the original T did have a single global property space, but I believe that the new T (T3) has fixed that. Perhaps one of the T folk would care to comment (I'm on shaky ground here). MarkM  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 10 Jun 86 18:08:18 EDT Received: from IBM.COM by SU-AI.ARPA with TCP; 10 Jun 86 14:38:18 PDT Date: 10 June 1986, 16:31:56 CT From: "Brandon P. Cross" To: Common-LISP@SU-AI.ARPA Has anyone thought about the proposed meeting during AAAI? Any thoughts? 
Brandon  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 10 Jun 86 17:23:49 EDT Received: from IBM.COM by SU-AI.ARPA with TCP; 10 Jun 86 14:12:53 PDT Date: 10 June 1986, 15:54:07 CT From: "Brandon P. Cross" To: common-lisp@SU-AI.ARPA While making plans to attend AAAI in August I remembered that there were plans to hold Common LISP meetings before, during or after the conference. Any thoughts? Brandon Cross BRANDON@IBM.COM  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 10 Jun 86 12:10:05 EDT Received: from [192.10.41.41] by SU-AI.ARPA with TCP; 10 Jun 86 08:58:18 PDT Received: from CHICOPEE.SCRC.Symbolics.COM by ELEPHANT-BUTTE.SCRC.Symbolics.COM via CHAOS with CHAOS-MAIL id 18478; Tue 10-Jun-86 11:43:58 EDT Date: Tue, 10 Jun 86 11:45 EDT From: Daniel L. Weinreb Subject: Some questions To: miller.pa@XEROX.ARPA, Common-lisp@SU-AI.ARPA In-Reply-To: <860529-212115-1390@Xerox> Message-ID: <860610114524.0.DLW@CHICOPEE.SCRC.Symbolics.COM> Date: 29 May 86 21:21 PDT From: miller.pa@Xerox.COM How happy are you with packages? In particular, for those familiar with T's reified lexical environments (aka LOCALEs), can you think of any reason for preferring packages? Can T's LOCALEs be added to the language in an upwards compatible way? I'd like to point out, again, that packages get you things that locales don't (if I understand locales properly -- someone correct me if I'm wrong). Packages provide name scoping for symbols themselves, not for values. Therefore, if symbols are being used by virtue of their identity rather than their value, packages provide name scoping and locales do not. For example, suppose one subsystem uses the A:FOO property of symbols, and another subsystem uses the B:FOO property.  
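[Editorial note: Weinreb's A:FOO / B:FOO example above can be made concrete. The following is a sketch of my own, not code from any of the messages; the packages A and B and the symbol BAR are hypothetical names chosen to match his example.]

```lisp
;; A minimal sketch of property-name scoping via packages,
;; assuming two packages A and B as in the example above.
(make-package "A")
(make-package "B")

;; Each subsystem uses its own FOO symbol as a property indicator.
(setf (get 'bar (intern "FOO" (find-package "A"))) 'a-data)
(setf (get 'bar (intern "FOO" (find-package "B"))) 'b-data)

;; A:FOO and B:FOO are distinct symbols, so the two subsystems'
;; properties on BAR never collide:
(get 'bar (intern "FOO" (find-package "A")))   ; => A-DATA
(get 'bar (intern "FOO" (find-package "B")))   ; => B-DATA
```

Because GET compares property indicators with EQ, the scoping here comes entirely from symbol identity rather than from values, which is exactly the distinction Weinreb draws between packages and locales.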
Received: from SU-AI.ARPA by AI.AI.MIT.EDU 10 Jun 86 11:41:31 EDT Received: from KIM.Berkeley.EDU by SU-AI.ARPA with TCP; 10 Jun 86 08:30:08 PDT Received: by kim.Berkeley.EDU (5.51/1.14) id AA15343; Tue, 10 Jun 86 08:30:13 PDT Received: from fimass by franz (5.5/3.14) id AA06427; Tue, 10 Jun 86 07:37:19 PDT Received: by fimass (5.5/3.14) id AA09293; Tue, 10 Jun 86 06:36:01 PST From: franz!fimass!jkf@kim.Berkeley.EDU (John Foderaro) Return-Path: Message-Id: <8606101436.AA09293@fimass> To: David C. Plummer Cc: common-lisp@su-ai.arpa Subject: Re: tail recursion optimization In-Reply-To: Your message of Mon, 09 Jun 86 21:52:00 EDT. <860609215232.9.DCP@FIREBIRD.SCRC.Symbolics.COM> Date: Tue, 10 Jun 86 06:35:55 PST >> From: David C. Plummer >> Inline is orthogonal to tail recursion optimization. >> (defun foo (a b) (declare (inline foo)) (foo a (+ a b))) >> is equivalent to >> (defun foo (a b) (+ a (+ a (+ a (+ a ...))))) >> and not >> (defun foo (a b) (loop (psetq a a b (+ a b)))) I think that the wording of the manual is ambiguous enough that both interpretations are correct. The manual only talks about the called function being 'integrated into' the calling function. Your foo function simply adds its arguments and then calls itself with the value of the first argument and the sum. This is equivalent, in the operations performed, to (defun foo (a b) (loop (psetq a a b (+ a b)))) rather than to (defun foo (a b) (+ a (+ a (+ a (+ a ...))))) which will never end up doing even one addition. Thus the second interpretation of 'inline foo' does, in my opinion, a better job of integrating the definition of foo inline. The first interpretation shows the danger of assuming that inline expansion should be done exactly like macroexpansion. 
-jkf  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 10 Jun 86 11:23:46 EDT Received: from MIT-LIVE-OAK.ARPA by SU-AI.ARPA with TCP; 10 Jun 86 08:08:04 PDT Received: from GOLD-HILL-ACORN.DialNet.Symbolics.COM by MIT-LIVE-OAK.ARPA via DIAL with SMTP id 2522; 10 Jun 86 11:07:14-EDT Received: from BOSTON.Gold-Hill.DialNet.Symbolics.COM by ACORN.Gold-Hill.DialNet.Symbolics.COM via CHAOS with CHAOS-MAIL id 23054; Tue 10-Jun-86 11:07:17-EDT Date: Tue, 10 Jun 86 11:07 EST From: mike@a To: Miller.pa@Xerox.COM Subject: Correction to reply to: Re: Closure, Null Lexical Env. Cc: common-lisp@SU-AI.ARPA Date: 9 Jun 86 17:35 PDT From: Miller.pa@Xerox.COM Please excuse my answer involving misreading of: "(eval '#'(lambda (x y) ...) nil)" does seem to be the best (only?) way to express "closure over null lexical environment" in common-lisp. There is no reason for the compiler to not compile this. which I read as (eval #'(lambda...)). Most of the comments I made are hash w.r.t. this. Sorry,... got to get new eyeglasses I guess. There is still no lexical environment passed to eval. My comments about use of null lexical environments are still valid I think. I'd still like to know what useful work can be accomplished using a closure in a null lexical environment. Likely I've missed some of the CL mail about this due to mail/fs problems. ...mike  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 10 Jun 86 11:01:22 EDT Received: from MIT-LIVE-OAK.ARPA by SU-AI.ARPA with TCP; 10 Jun 86 07:51:14 PDT Received: from GOLD-HILL-ACORN.DialNet.Symbolics.COM by MIT-LIVE-OAK.ARPA via DIAL with SMTP id 2520; 10 Jun 86 10:49:55-EDT Received: from BOSTON.Gold-Hill.DialNet.Symbolics.COM by ACORN.Gold-Hill.DialNet.Symbolics.COM via CHAOS with CHAOS-MAIL id 23052; Tue 10-Jun-86 10:49:47-EDT Date: Tue, 10 Jun 86 10:49 EST Sender: mike@a To: Miller.pa@Xerox.arpa From: mike%Gold-Hill-Acorn@mit-live-oak.arpa Subject: Re: Closure, Null Lexical Env. 
Cc: common-lisp@SU-AI.ARPA Date: 9 Jun 86 17:35 PDT From: Miller.pa@Xerox.COM "(eval '#'(lambda (x y) ...) nil)" does seem to be the best (only?) way to express "closure over null lexical environment" in common-lisp. NOT COMMON LISP. Eval does not take a second arg which is an environment. (pg 323) furthermore, eval *always* uses the null lexical environment. Your suggestion implies that eval should not be a function, but rather, a special-form or macro since the following are not equivalent: (let ((x #'(lambda (x y) ...))) (eval x)) (eval #'(lambda (x y) ...)) which is surprising since (let ((x '(lambda (x y) ...))) (eval x)) (eval '(lambda (x y) ...)) are equivalent. In CL, eval is a function. In (eval
form), eval doesn't even get called until the form has been evaluated in the current environment. It is only if the form evaluates to a list that issues of what environment eval uses can be relevant. i.e., (flet ((foobar (x) (eval x))) .... (foobar #'(lambda (x y) ...))) is entirely equivalent to (eval #'(lambda (x y) ...)). If you want a lex-closure which has no lexical environment, then free non-special vars within it should never be touched or they'd produce errors (unbound lexical var I guess). What do you want to use this device for? I can only think of one scenario, which is as an "end of scope" device, a la the language Pebble. Pebble uses big moby function objects (read lexical closures) as a modularization tool, similar in spirit to the use of environments in scheme, and intended to solve "package" problems via function composition. To detect errors in this modularization, they provide a scope-ender construct, so that you can know that your function isn't getting anything from outer lexical scopes. (personally, I don't like Pebble, but the concept seems to apply here). Is there any other reason that you could possibly want this close-in-null-lexical-environment ??? mike beckerle Gold Hill Computers mike%gold-hill-acorn@mit-live-oak.arpa  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 10 Jun 86 10:39:36 EDT Received: from GODOT.THINK.COM by SU-AI.ARPA with TCP; 10 Jun 86 07:29:03 PDT Received: from GUIDO.THINK.COM by Godot.Think.COM; Tue, 10 Jun 86 10:28:52 edt Date: Tue, 10 Jun 86 10:29 EDT From: Guy Steele Subject: Re: tail recursion optimization To: DCP@QUABBIN.SCRC.Symbolics.COM, common-lisp@SU-AI.ARPA Cc: gls@AQUINAS In-Reply-To: <860609215232.9.DCP@FIREBIRD.SCRC.Symbolics.COM> Message-Id: <860610102938.3.GLS@GUIDO.THINK.COM> Date: Mon, 9 Jun 86 21:52 EDT From: David C. 
Plummer Date: Tue, 03 Jun 86 06:27:53 PST From: franz!fimass!jkf@kim.Berkeley.EDU (John Foderaro) To answer earl's question: yes, a self-tail-recursive call is made with a jump to the beginning of the function code. I agree with Rob, this is an important optimization and 99% of the time it is precisely what you want. I'd suggest that the Common Lisp Standard state that within the defun of foo, there is an implicit (declare (inline foo)) and if the user plans on redefining foo while executing foo, it is his burden to (declare (notinline foo)). Inline is orthogonal to tail recursion optimization. (defun foo (a b) (declare (inline foo)) (foo a (+ a b))) is equivalent to (defun foo (a b) (+ a (+ a (+ a (+ a ...))))) and not (defun foo (a b) (loop (psetq a a b (+ a b)))) But the latter appears to be a valid (and cleverly optimized) implementation of the former. --Guy  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 10 Jun 86 07:47:45 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 10 Jun 86 04:39:37 PDT Received: ID ; Tue 10 Jun 86 07:39:21-EDT Date: Tue, 10 Jun 1986 07:39 EDT Message-ID: From: Rob MacLachlan To: "David C. Plummer" Cc: common-lisp@SU-AI.ARPA Subject: tail recursion optimization Date: Monday, 9 June 1986 21:52-EDT From: David C. Plummer To: common-lisp at SU-AI.ARPA Re: tail recursion optimization Inline is orthogonal to tail recursion optimization. (defun foo (a b) (declare (inline foo)) (foo a (+ a b))) is equivalent to (defun foo (a b) (+ a (+ a (+ a (+ a ...))))) and not (defun foo (a b) (loop (psetq a a b (+ a b)))) Yes, but I didn't say that the tail-recursive call should be considered INLINE; I just said that it could be forced to be a full function call by declaring it NOTINLINE. NOTINLINE isn't the opposite of INLINE, since NOTINLINE affects things which were never declared INLINE and is mandatory rather than advisory. 
NOTINLINE is an important declaration in Common Lisp since it provides a way to tell the compiler not to make any assumptions about the definition of a function. I don't know to what extent current compilers do this, but I think that when a function is declared NOTINLINE, it should be called exactly as given, regardless of what the function is. This inhibits any optimization of the call. It would be incorrect to compile (+ (+ 3 4) 5) as 12 or even as (+ 3 4 5) if + was declared NOTINLINE. Making the function be lexically defined within the body of DEFUN is a clean solution, but it does have some problems. Every implementation would be required to make defun close over the definition at DEFUN time. This would require every implementation to be changed, and would put a serious strain on some. In particular, in the current PERQ Spice Lisp implementation, the only way to do this would be to make every function a lexical closure. It also has the problem that this early binding of the name becomes a mandatory part of the semantics, and thus couldn't be inhibited by a compiler switch. Rob  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 9 Jun 86 22:00:33 EDT Received: from [192.10.41.41] by SU-AI.ARPA with TCP; 9 Jun 86 18:53:50 PDT Received: from FIREBIRD.SCRC.Symbolics.COM by ELEPHANT-BUTTE.SCRC.Symbolics.COM via CHAOS with CHAOS-MAIL id 18213; Mon 9-Jun-86 21:53:34 EDT Date: Mon, 9 Jun 86 21:52 EDT From: David C. Plummer Subject: Re: tail recursion optimization To: common-lisp@SU-AI.ARPA In-Reply-To: <8606031427.AA01279@fimass> Message-ID: <860609215232.9.DCP@FIREBIRD.SCRC.Symbolics.COM> Date: Tue, 03 Jun 86 06:27:53 PST From: franz!fimass!jkf@kim.Berkeley.EDU (John Foderaro) To answer earl's question: yes, a self-tail-recursive call is made with a jump to the beginning of the function code. I agree with Rob, this is an important optimization and 99% of the time it is precisely what you want. 
I'd suggest that the Common Lisp Standard state that within the defun of foo, there is an implicit (declare (inline foo)) and if the user plans on redefining foo while executing foo, it is his burden to (declare (notinline foo)). Inline is orthogonal to tail recursion optimization. (defun foo (a b) (declare (inline foo)) (foo a (+ a b))) is equivalent to (defun foo (a b) (+ a (+ a (+ a (+ a ...))))) and not (defun foo (a b) (loop (psetq a a b (+ a b))))  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 9 Jun 86 20:49:06 EDT Received: from XEROX.COM by SU-AI.ARPA with TCP; 9 Jun 86 17:36:00 PDT Received: from Chardonnay.ms by ArpaGateway.ms ; 09 JUN 86 17:35:54 PDT Date: 9 Jun 86 17:35 PDT From: Miller.pa@Xerox.COM Subject: Re: Closure, Null Lexical Env. In-reply-to: Hvatum@MIT-MULTICS.ARPA's message of Mon, 9 Jun 86 00:35 EDT To: Hvatum@MIT-MULTICS.ARPA cc: common-lisp@SU-AI.ARPA Message-ID: <860609-173554-1352@Xerox> "(eval '#'(lambda (x y) ...) nil)" does seem to be the best (only?) way to express "closure over null lexical environment" in common-lisp. There is no reason for the compiler to not compile this. Both the arguments to "eval" are constant, and there are no environmental dependencies. This can be compiled for the same reason that "(+ 3 4)" can be turned into a "7" at compile time. The constant list "(FUNCTION (LAMBDA ...))" would not have to be preserved in the compiled code for the same reason that the "3" doesn't. I realize that probably no existing compiler compiles this, but it would seem an easy change to make. As easy as adding some other language construct to create such closures, but without making the language definition any bloody larger. Any program which assumed this enhancement would continue to work on current implementations (although slowly). 
MarkM  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 9 Jun 86 14:59:05 EDT Received: from [192.10.41.41] by SU-AI.ARPA with TCP; 9 Jun 86 11:47:17 PDT Received: from RIO-DE-JANEIRO.SCRC.Symbolics.COM by ELEPHANT-BUTTE.SCRC.Symbolics.COM via CHAOS with CHAOS-MAIL id 17313; Mon 9-Jun-86 09:37:17 EDT Date: Mon, 9 Jun 86 09:36 EDT From: Kent M Pitman Subject: Why aren't char bits portable? To: COMMON-LISP@SU-AI.ARPA cc: REM%IMSSS@SU-AI.ARPA Supersedes: <860603211152.2.KMP@RIO-DE-JANEIRO.SCRC.Symbolics.COM> Message-ID: <860609093644.1.KMP@RIO-DE-JANEIRO.SCRC.Symbolics.COM> [Retrying mail returned due to network problems; apologies if anyone got duplicate copies.] There's nothing unportable about char bits. They just have to be used correctly, but I think the necessary primitives are provided. eg, MACSYMA has an editor which uses one-character commands and extended commands. In some implementations, users have to use more commands than in others, because in implementations which allow char bits, MACSYMA will install commands on those characters. Although this somewhat affects the behavior of the program between implementations, I don't think the effect is likely to be serious at the program level primarily because (with the exception of AI programs that manipulate robot arms which in turn want to enter key sequences on the terminal), programs mostly can't tell the difference between not having those characters and having those characters be available but just never executed. You have to be careful not to use #\Control-..., of course. That's not portable. 
But you can do, for example: (DEFVAR *MY-COMMAND-TABLE* (MAKE-HASH-TABLE)) (DEFVAR *COMMAND-TABLE* NIL) (DEFUN INSTALL-COMMAND-ON-KEY (FUNCTION CHAR &REST BIT-NAMES) (DO ((CH CHAR (SET-CHAR-BIT CH (CAR B) T)) (B BIT-NAMES (CDR B))) ((NOT B) (SETF (GETHASH CH *MY-COMMAND-TABLE*) FUNCTION)))) (DEFUN GET-COMMAND-ON-KEY (KEY) (GETHASH KEY *MY-COMMAND-TABLE*)) (DEFUN PROMPT (X) (UNLESS (LISTEN) ;Tolerate line-at-a-time systems (FORMAT T "~&~A: " X))) (DEFUN READ-KEY-COMMAND () (PROMPT "Command") (READ-CHAR)) (DEFVAR *LAST-NUMBER* 0) (DEFUN READ-NUMBER () (PROMPT "Value") (SETQ *LAST-NUMBER* (READ))) (DEFUN CALCULATOR (&OPTIONAL (VAL 0) (*COMMAND-TABLE* *MY-COMMAND-TABLE*)) (DO ((CH (READ-KEY-COMMAND) (READ-KEY-COMMAND))) ((MEMBER CH '(#\X #\x))) (LET ((FN (GET-COMMAND-ON-KEY CH))) (COND (FN (PRINT (SETQ VAL (FUNCALL FN VAL)))) ((CHAR= CH #\Newline) NIL) ;Tolerate line-at-a-time systems (T (FORMAT T "~&~@:C is not a defined command.~%" CH)))))) (INSTALL-COMMAND-ON-KEY #'(LAMBDA (X) X 0) #\0) (INSTALL-COMMAND-ON-KEY #'(LAMBDA (X) (+ X (READ-NUMBER))) #\+) (INSTALL-COMMAND-ON-KEY #'(LAMBDA (X) (- X (READ-NUMBER))) #\-) (INSTALL-COMMAND-ON-KEY #'(LAMBDA (X) (* X (READ-NUMBER))) #\*) (INSTALL-COMMAND-ON-KEY #'(LAMBDA (X) X) #\=) (UNLESS (ZEROP CHAR-CONTROL-BIT) (INSTALL-COMMAND-ON-KEY #'(LAMBDA (X) (+ X *LAST-NUMBER*)) #\+ :CONTROL) (INSTALL-COMMAND-ON-KEY #'(LAMBDA (X) (- X)) #\- :CONTROL)) (CALCULATOR) Command: = 0 Command: + Value: 3 3 Command: * Value: 4 12 Command: Control-- -12 Of course, in some implementations, it won't be possible to type Control--, so the user of that interpretation will never observe any difference in behavior other than the inaccessibility of certain commands. And there are no functionalities in the above calculator which are on Control-anything which are not otherwise accessible, so a portable program would not be "broken" by their non-accessibility. 
In fact, the above program could use MAPHASH to portably create self-documentation that worked correctly for the appropriate implementation even if implementations varied. Note that the thing which makes this reasonable (where package problems of a similar nature are not) is that the funny bits are a property of a user interface, not of a program interface. I'm certainly willing to believe that lots of people won't use this feature, but I do think it's meaningful. I'm also not 100% convinced that there are no pitfalls to the style-of-use I'm proposing above, but I've not run across any. If anyone has experience with using the above strategy or a similar one and running into unforeseen problems I've not mentioned, I'd be interested to hear about them. If people agree that the above program is neither faulty nor unportable, then perhaps it or something like it could be included in the next manual draft so that readers could see an example of the careful programming style you have to stick to in order to make this stuff work. I admit it's not the sort of strategy that leaps to mind the first time you see the available primitives. The line-at-a-time issue that comes up implicitly in that example is much more troublesome to me. I wish someone would suggest how we could deal more satisfactorily with that. By the way, I see no purpose in fonted char-bits. Bits are something that it seems to me are associated with an input operation and fonts are something associated with an output operation. I'm not sure what to draw from this, but my impression is that it is nearly always the case that either the bits are 0 or the font is 0, even in systems which use both bits and fonts. I just can't imagine treating Bold Control-A differently than Italic Control-A.  
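[Editorial note: the MAPHASH-based self-documentation KMP mentions could look something like the following sketch. DESCRIBE-COMMANDS is an invented name; this is not code from the original message.]

```lisp
;; A hypothetical sketch: enumerate whatever commands were actually
;; installed in this implementation.  Keys that could not be created
;; (e.g. :CONTROL keys in an implementation where CHAR-CONTROL-BIT
;; is 0) never made it into the table, so the listing stays accurate
;; without any implementation-conditional code.
(DEFUN DESCRIBE-COMMANDS (&OPTIONAL (TABLE *MY-COMMAND-TABLE*))
  (MAPHASH #'(LAMBDA (KEY FUNCTION)
               (FORMAT T "~&~@:C runs ~S~%" KEY FUNCTION))
           TABLE))
```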
Received: from SU-AI.ARPA by AI.AI.MIT.EDU 9 Jun 86 14:58:26 EDT Received: from [192.10.41.41] by SU-AI.ARPA with TCP; 9 Jun 86 11:46:34 PDT Received: from RIO-DE-JANEIRO.SCRC.Symbolics.COM by ELEPHANT-BUTTE.SCRC.Symbolics.COM via CHAOS with CHAOS-MAIL id 16500; Fri 6-Jun-86 10:12:56 EDT Date: Fri, 6 Jun 86 10:13 EDT From: Kent M Pitman Subject: tail recursion optimization To: RAM@CMU-CS-C.ARPA cc: Common-Lisp@SU-AI.ARPA, KMP@SCRC-STONY-BROOK.ARPA References: <860603033423.2.KMP@RIO-DE-JANEIRO.SCRC.Symbolics.COM>, Message-ID: <860606101309.7.KMP@RIO-DE-JANEIRO.SCRC.Symbolics.COM> [Sorry about the delay in this reply. This mail was only just returned due to network problems. Retrying; apologies if anyone gets duplicates.] Date: Tue, 3 Jun 86 03:34 EDT From: Kent M Pitman Subject: tail recursion optimization To: ram@CMU-CS-C.ARPA cc: kmp@SCRC-STONY-BROOK.ARPA, common-lisp@SU-AI.ARPA In-Reply-To: Message-ID: <860603033423.2.KMP@RIO-DE-JANEIRO.SCRC.Symbolics.COM> My point is that this isn't a beauty contest and your speech about "what function calling means to me" is not going to carry any weight. Did you read the passage I cited? On p59, in the named functions description: "If a symbol appears as the first element of a function-call form, then it refers to the definition established by the innermost FLET or LABELS construct that textually contains the reference or to the global definition (if any) if there is no such containing construct". The definition of DEFUN (p67) says (in a combination of text and prose) that (DEFUN name lambda-list {declaration | doc-string}* {form}*) makes the global definition of NAME (ie, the thing referred to above) be (LAMBDA lambda-list {declaration | doc-string}* (BLOCK name {form}*)) I note in passing that were this adequate to assure closure of form over name, then FLET and LABELS would be the same. 
The whole point of LABELS is to address this {mis}feature of the spec which does not provide for the closure of a named function over its own function cell. Anyway, getting back to my argument, p95 says I can setf the symbol-function of a symbol. (p90 defines that the symbol-function of a symbol is what holds the global definition of a function.) p438 assures me that compiling my program "should produce an equivalent but more efficient program". Given all this, I take the wording on p59 to say, effectively: "Whether running interpreted or compiled, a piece of code which was denoted at the source level by a list the car of which was a function name not mentioned in any bounding FLET or LABELS will have behavior equivalent to that which would be obtained by applying the contents of the named function's symbol-function cell to the evaluated arguments." I'm not interested in debating whether this is a productive or efficient interpretation. I happen to personally prefer languages which offer the meaning you describe. However, Common Lisp is not such a language. I would support a move to change this aspect of the language in a future spec, but I am adamant that the meaning of a language must come (insofar as it is possible) from the wording of its specification. In this case, the specification seems quite clear. If you believe that this is not a correct interpretation, please cite supporting passages. You suggested that the wording is ambiguous but offered no supporting evidence. All of your arguments were based on what's efficient, not what's specified. At language design time, you worry about efficiency implications. Once you've put the words on paper, however, you can only worry about the implications of the wording or about how you're going to change things in a later version.  
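[Editorial note: the p59 reading KMP defends can be exercised with a small sketch. This is my own illustration, under the assumption that self-calls go through the symbol-function cell; it is not code from the original messages.]

```lisp
;; A minimal sketch.  Under the interpretation KMP quotes, the
;; self-call inside the old definition must see the redefinition.
(defun count-down (n)
  (if (zerop n) 'done (count-down (- n 1))))

(let ((old #'count-down))
  (setf (symbol-function 'count-down)
        #'(lambda (n) (declare (ignore n)) 'stopped))
  ;; With late binding through SYMBOL-FUNCTION this returns STOPPED;
  ;; a compiler that turned the self-call into a jump (the
  ;; optimization under discussion) would return DONE instead.
  (funcall old 5))
```

This is precisely the observable difference between the two interpretations being argued over: a NOTINLINE declaration on COUNT-DOWN would force the STOPPED behavior either way.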
Received: from SU-AI.ARPA by AI.AI.MIT.EDU 9 Jun 86 12:49:42 EDT Received: from CSNET-RELAY.ARPA by SU-AI.ARPA with TCP; 9 Jun 86 09:34:39 PDT Received: from utokyo-relay by csnet-relay.csnet id ab11788; 9 Jun 86 12:29 EDT Received: by u-tokyo.junet (4.12/4.9J-1[JUNET-CSNET]) id AA04240; Mon, 9 Jun 86 19:30:38+0900 Received: by ccut.u-tokyo.junet (4.12/6.1Junet) id AA04960; Mon, 9 Jun 86 19:22:32+0900 Date: Mon, 9 Jun 86 19:22:32+0900 From: Masayuki Ida Message-Id: <8606091022.AA04960@ccut.u-tokyo.junet> To: common-lisp@SU-AI.ARPA, ida@UTOKYO-RELAY.CSNET Subject: subset This is in reply to mike@gold-hill-acorn My basic understanding of the subset issue is as follows: 1) A "subset" is a subset of the "fullset". A subset Common Lisp should NOT be a Common-Lisp-like Lisp dialect. 2) In general, a language spec which has been discussed with many excellent implementors/researchers has a tendency to grow larger. I welcome this tendency, but there should not exist a field left behind the standardization. There should be two levels or so of language spec for Common Lisp. 3) The need for a Common Lisp on small computers will be much larger. The applications on small computers are not only educational ones. As to Mike's discussion on scoping, I want to have strict conformance to the full or superset specification. My experimental Lisp for the 86/286 also had a "local lexical and complete global" principle. I found it is not good to push my scoping principle. As to CommonLoops or an object-oriented facility for the subset, I also want to add it. But it should be an option. The members of the working group in Japan are forming a consensus about a subset. They internally call their subset "Common Lisp Core". 
ida ida%utokyo-relay.csnet@csnet-relay.arpa  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 9 Jun 86 07:46:59 EDT Received: from SU-SHASTA.ARPA by SU-AI.ARPA with TCP; 9 Jun 86 04:38:09 PDT Received: by su-shasta.arpa; Mon, 9 Jun 86 04:36:33 PDT Received: by nttlab.ntt.junet (4.12/4.7JC-7) with TCP id AA23501; Mon, 9 Jun 86 15:58:15 jst From: hagiya@kurims.kurims.kyoto-u.junet Received: by nttlab.ntt.junet (4.12/4.7JC-7) with TCP id AA23374; Mon, 9 Jun 86 15:52:22 jst Received: by kurims.kyoto-u.junet (2.0/4.7) id AA00874; Thu, 5 Jun 86 19:17:09+0900 Date: Thu, 5 Jun 86 19:17:09+0900 Message-Id: <8606051017.AA00874@kurims.kyoto-u.junet> To: Common-Lisp@su-ai.arpa Subject: long-char, kanji I think that all the discussions on extending character code are made with the intent of defining the `international' version of Common Lisp. The discussions, therefore, seem to keep the pure version of Common Lisp intact and add some alien features to it by introducing such (ugly) names as "FAT-...", "PLUMP-...", etc.; as a result of the extension, the complex type hierarchy of Common Lisp becomes more complex. Complexity is sometimes unavoidable. However, I hope the simplest solution will also be permitted, as Moon argues: Another solution that should be permitted by the language is to have only one representation for strings, which is fat enough to accomodate all characters. In some environments the frequency of thin strings might be low enough that the storage savings would not justify the extra complexity of optimizing strings that contain only STRING-CHARs. Common Lisp (or Lisp) is a very flexible language, and in an extreme case, we can replace all the special forms, macros and functions of Common Lisp with their Japanese (or Chinese or any language) counterparts by preparing appropriate macro packages. In that case, the doc-strings will also be written in Japanese. In such a completely Japanese environment, it's silly to treat 8-bit characters in a special way. 
I think that the discussions should be conducted in terms of how the extended code will be used. We can consider several situations. 1. Programs written in one language (other than English) manipulate data (characters or strings) in that language. 2. Programs written in English manipulate data in one language (other than English). 3. Programs written in English manipulate data in more than one language at the same time. I myself like to program in Japanese even if the program does not manipulate Japanese characters or strings. It's easier to devise function names in one's mother tongue than in a foreign language. By the way, no one seems to stress the importance of extending symbol names. If an extension, for example, allows simple strings to hold only 8-bit characters and requires symbol names to be simple strings, then I completely disagree with that. Masami Hagiya  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 9 Jun 86 00:48:04 EDT Received: from MIT-MULTICS.ARPA by SU-AI.ARPA with TCP; 8 Jun 86 21:39:51 PDT Date: Mon, 9 Jun 86 00:35 EDT From: Hvatum@MIT-MULTICS.ARPA Subject: Closure, Null Lexical Env. To: common-lisp@SU-AI.ARPA Message-ID: <860609043512.124364@MIT-MULTICS.ARPA> From: Steve Bacher at Draper Lab Subject: Constructing a Closure over the Null Lexical Environment Replying to a previous message from Miller.pa@Xerox.COM > What about > > (EVAL '#'(lambda (x y) ...) NIL) (eval '#'(lambda ...) nil) probably won't create a compiled definition, because the interpreter (eval, that is) may create an interpretive lexical closure when it sees (function ...), unlike the compiler which will create a compiled lexical closure. In any case, the lambda expression won't be compiled in (QUOTE (FUNCTION (LAMBDA ...))), which makes sense when you think about it. I don't think #.(eval ...) would necessarily work either, for the reasons mentioned above. Thought: Does CL specify anything about the nature of what is returned by #'(lambda ...) when it is evaluated (as opposed to being compiled)? 
In particular, do the interpreter and the compiler have to return isomorphic objects (as opposed to merely functionally identical objects)? Consider what DEFUN does in both contexts (I'm not talking about those famed implementations that compile DEFUN's on the fly).   Received: from SU-AI.ARPA by AI.AI.MIT.EDU 6 Jun 86 06:39:10 EDT Received: from CSNET-RELAY.ARPA by SU-AI.ARPA with TCP; 6 Jun 86 03:31:10 PDT Received: from utokyo-relay by csnet-relay.csnet id aa04180; 6 Jun 86 6:27 EDT Received: by u-tokyo.junet (4.12/4.9J-1[JUNET-CSNET]) id AA13546; Fri, 6 Jun 86 17:13:57+0900 Received: by ccut.u-tokyo.junet (4.12/6.1Junet) id AA07149; Fri, 6 Jun 86 16:34:09+0900 Date: Fri, 6 Jun 86 16:34:09+0900 From: Masayuki Ida Message-Id: <8606060734.AA07149@ccut.u-tokyo.junet> To: common-lisp@SU-AI.ARPA, ida@UTOKYO-RELAY.CSNET, nuyens.pa@XEROX.COM Subject: Re: long-char, kanji >From ccut!Shasta!@SU-AI.ARPA:nuyens.pa%Xerox.COM@u-tokyo.junet Wed Jun 4 11:49:48 1986 >Date: 3 Jun 86 15:22 PDT >Subject: re: long-char, kanji >To: common-lisp@su-ai.ARPA > > ... >representation: >Strings are represented as homogeneous simple vectors of thin (8 bit) or >fat (16 bit) characters. Ignoring storage taken to represent them, the >difference between fat characters and thin characters is transparent to >the user. In particular, since we allow fat characters in symbol print >names, we use an equivalent of Ida's string-normalize function to >guarantee unique representation for hashing. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ This is the most important decision point, I think. I agree to do so. With Moon's idea, the relation of thin- and fat- is like the relation of fixnum and bignum. This means the characters in fat-char and in thin-char are completely independent. But any character set may contain characters which have the same appearance as standard-char, such as the space code, alphabetic characters, and terminating-macro characters. Actually JIS 6226 has another code for the standard-char. 
I think other foreign character sets may have characters with the same visual form as standard-char. With Moon's idea of ASSURE-FAT-STRING, once there is a fat-char in a string, it cannot be reduced to a thin string, even if later modifications leave the string containing only characters representable in thin code. > >kanji: >NS includes all "JIS C 6226" graphic characters including the 6300 most >common Japanese kanji. There are also Hiragana and Katakana character >codes specified. (While there is substantial overlap with the Japanese >kanji, Chinese characters are semantically separate and their character >code assignments have not yet been published.) > The reason why I stick to the kanji issue is not only that I am Japanese, but that I feel it is the test case for coping with multi-byte characters; as a Common Lisper, I feel a need to polish up the character data type. >type hierarchy: >Since we have char-bits-limit = char-font-limit = 1, STANDARD-CHAR is >the same as STRING-CHAR. I agree with Moon that STRING should be >(VECTOR CHARACTER) and provide specialisations (even though this is a >change from the status quo). In our applications, we do as Fahlman >suggests and use external data-structures to represent the sort of >information encoded in "styles". (It is hard to standardize which >attributes should be made part of style (some people claim "case" should >be a style bit!)). I like the "style" idea also. I don't want to use font. > >number of character codes required: >At first glance it seems hard to imagine exceeding 16 bits. Note >however that the 7200 characters in NS don't include Chinese, Korean, >Farsi, Hindi, etc. How many times have you been *sure* that the FOO >field wouldn't be required to be larger than 16 bits? > As far as the Japanese character set is concerned, 16 bits for char-code is enough. But, as an international standard, I feel room for more bits is needed. 
> >Greg Nuyens >Text, Graphics and Printing, >Xerox AI Systems > > Masayuki Ida ida%utokyo-relay.csnet@csnet-relay.arpa  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 6 Jun 86 04:26:09 EDT Received: from SU-SHASTA.ARPA by SU-AI.ARPA with TCP; 6 Jun 86 01:16:35 PDT Received: by su-shasta.arpa; Fri, 6 Jun 86 01:14:51 PDT Received: by nttlab.ntt.junet (4.12/4.7JC-7) with TCP id AA00370; Thu, 5 Jun 86 16:28:23 jst From: yuasa@kurims.kurims.kyoto-u.junet Received: by kurims.kyoto-u.junet (2.0/4.7) id AA00541; Thu, 5 Jun 86 15:11:25+0900 Date: Thu, 5 Jun 86 15:11:25+0900 Message-Id: <8606050611.AA00541@kurims.kyoto-u.junet> To: Common-Lisp@su-ai.arpa Subject: some questions Date: Fri, 30 May 1986 03:29 EDT From: Rob MacLachlan To: miller.pa@XEROX.COM Cc: Common-lisp@SU-AI.ARPA Subject: Some questions In-Reply-To: Msg of 30 May 1986 00:21-EDT from miller.pa at Xerox.COM Status: R Date: Friday, 30 May 1986 00:21-EDT From: miller.pa at Xerox.COM To: Common-lisp at SU-AI.ARPA Re: Some questions Now that all you people and companies out there are doing all these common-lisp compilers, I'd like to ask you some questions: Do you transform the source to continuation passing style before compiling? How happy are you with this decision? If you don't, do you do tail-recursion optimizations anyway? If you do, do you do multiple values by calling the continuation with multiple arguments? ...... Many of the Common Lisps floating around are based on Spice Lisp and this compiler. Some exceptions are Symbolics, Lucid and KCL. The current KCL compiler is a two-pass compiler which generates portable C code. (I do not want to repeat the discussions on why C code and how it is possible. Please refer to the KCL Report for this issue.) The main roles of the first pass are: 1. to discriminate those lexical objects (variables, functions, blocks, and tags) that are to be closed up in a closure from those not. 2. to collect optimization information such as type information and reference information. 
Most of the compile-time error checking is done during the first pass. The first pass generates intermediate tree code that is similar to the original Lisp code but is annotated throughout with the collected information. The second pass is the main pass and is responsible for the rest of the compilation job. Tail-recursion optimization is done during the second pass. Many other optimization hacks also take place during the second pass. KCL is not a descendant of any other Lisp. The only reference material we had was the draft of CLtL during the development of the first running version of KCL (October 1983 to March 1984). (CLtL itself was not yet published at that time.) We wrote every line of the KCL code by ourselves. (There is one exception: we later obtained the code of RATIONALIZE from the Spice project and used that code without changes.) Thus KCL was oriented toward Common Lisp from the very beginning. In particular, the compiler supports many Common Lisp features: The fact that the KCL compiler is a two-pass compiler already indicates this. (We could make it a one-pass compiler if lexical closures were not supported by Common Lisp.) In addition, the compiler takes care of keyword parameters for compile-time dispatching of such functions as MEMBER and ASSOC. It also tries to avoid creation of lexical closures in such situations as in MAP functions, FUNCALL, and APPLY. It tries to avoid making a list for &REST parameters in some situations, etc, etc, etc.... But it is true that there still remains much room for improvement. 
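The compile-time dispatching of MEMBER mentioned above can be sketched roughly as follows; MEMBER-EQ is a hypothetical name for the kind of specialized loop a call like (MEMBER X L :TEST #'EQ) can be rewritten into, not actual KCL code:

```lisp
;; Hypothetical sketch of keyword dispatching for MEMBER: when the
;; :TEST argument is a compile-time constant, the general call can
;; be rewritten into a loop with no keyword parsing at run time.
(defun member-eq (item list)
  ;; What (member item list :test #'eq) might compile into.
  (do ((l list (cdr l)))
      ((null l) nil)
    (when (eq item (car l))
      (return l))))
```

As with MEMBER itself, the result is the tail of the list beginning at the found element, so (member-eq 'b '(a b c)) returns (B C).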
-- Taiichi  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 5 Jun 86 11:30:49 EDT Received: from IMSSS by SU-AI with PUP; 05-Jun-86 08:20 PDT Date: 5 Jun 1986 0817-PDT From: Rem@IMSSS Subject: I claim subsetting doesn't kill portability To: COMMON-LISP@SU-AI I agree with Krall that if the LISP or whatever standardly-named package doesn't contain all of the CL language (or the subset) and exactly that with nothing else, you have trouble with portability. Krall seems to be saying there's no way to have the LISP package contain exactly the subset in one configuration and exactly full-blown CL in another. Well, with loading of compiled files into an existing LISP package and with multiple startup shell scripts (CMS EXEC files etc.), I claim it is possible. You simply put that part of the LISP system which absolutely must be handcoded or cross-compiled in the kernel, and put all the rest in runtime-loadable (binary program space) modules. You can then have well-defined supersets of the kernel which are subsets of the full-blown system (or if you want, kernel-supersets which are NOT subsets of CL). You have a different shell script for each well-defined (documented) superset of the kernel. The method can be (1) actually load in all the extra modules each time the user boots a new user environment ("core image"), (2) simply install autoload properties for all the functions in all the additional modules that are to be made available, or (3) pre-load all the actual modules or autoload properties and save the core image or savefile on the disk to be quickly restored by any user. 
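Option (2) can be sketched in a few lines; INSTALL-AUTOLOAD and the module pathname below are hypothetical names for illustration only, not part of any actual implementation:

```lisp
;; Sketch of option (2): install a stub definition that loads the
;; real module on first call, then re-invokes the (now redefined)
;; function with the original arguments.
(defun install-autoload (name module-file)
  (setf (symbol-function name)
        #'(lambda (&rest args)
            (load module-file)   ; module redefines NAME
            (apply (symbol-function name) args))))

;; e.g. (install-autoload 'pretty-print "modules/pprint-module")
```

If the module fails to redefine the function, this stub would loop; a production version would check for that, but the sketch shows the mechanism.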
Installation of CL on a given system can then consist of (a) building the kernel, (b) compiling all the loadable modules, (c) creating all the core images or savefiles if option (3) was used, (d) setting up all the shell scripts for the various supersets of the kernel, (e) making documentation for the CL available in hardcopy or online, including definition of the various subsets available, and (f) announcing the various shell scripts available, including telling which CL subsets (kernel supersets) correspond to which shell scripts. Porting a program written in an official CL-subset (kernel-superset) from one machine to another then becomes simple: (aa) FTP the source, (bb) read the comment at the top of the source that tells which subset it needs, (cc) check the announcements of shell scripts to see which one corresponds to that subset, (dd) run it. That algorithm doesn't work if the target machine didn't have the particular subset on which the original program was developed and which it needed to run, but the same would be true if the target machine didn't have CL at all, as would likely be the case if it was a small machine and CL didn't have subsets defined, so that argument isn't a refutation of my claim. Anybody able to find a sound refutation/rebuttal to my claim that Krall's view is wrong? -------  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 5 Jun 86 09:33:04 EDT Received: from MCC.ARPA by SU-AI.ARPA with TCP; 5 Jun 86 06:22:57 PDT Received: from mcc-pp by MCC.ARPA with TCP; Thu 5 Jun 86 08:22:42-CDT Posted-Date: Thursday, 5 June 1986, 08:24-CDT Message-Id: <8606051324.AA19847@mcc-pp> Received: from amber by mcc-pp (4.12/RKA.860601) id AA19847; Thu, 5 Jun 86 08:24:00 cdt Date: Thursday, 5 June 1986, 08:24-CDT From: Sender: KRALL%mcc-pp@mcc Subject: a standardization proposal To: common-lisp%SU-AI.ARPA@mcc Cc: guzman%mcc-pp@mcc, hall%mcc-pp@mcc I have been monitoring the portability issue for some time and have been dismayed by the subset contingent. 
The Lisp community must learn from Ada: subsets are useful for testing, training, and teaching, but not for portability. A program is fully portable:
I. if every symbol used in the program is instantiated (1) in the "Common-Lisp-only" package, or (2) in the package associated with the program itself, and
II. if the program uses only the standard readtable or one defined by the program itself, and
III. if it was written with porting to many systems as a goal, and
IV. if "everything else is OK!".
Thus no "compatible" extensions may be used, and -- most importantly -- there is no other package of symbols visible in the program. Every symbol used is in (1) or (2) and everything not in (1) and (2) is completely shadowed. Implication: if I want a specific mathematical function in my programs, I MUST reproduce ALL of the code of the function and everything it depends on and put it into my program! Even then I may not be able to reproduce the environment successfully. Implication #2: the Common Lisp packages must be identical throughout the target world, containing no more and no less than any other Common Lisp package if the program is to be ported. I don't believe the portability impact on Lisp is any different from that of C or Pascal or Ada; it's just that Lisp programmers take advantage of "goodies". In C the program is delivered with its library. So too in Lisp. I urge the Common Lisp community to reject the pleas for subset definition. -Ed Krall Microelectronics and Computer Technology Corporation 9430 Research Blvd. 
Austin, Texas 78759 (512) 834-3406 ARPA: krall@mcc.arpa UUCP: {ihnp4,seismo,ctvax}!ut-sally!im4u!milano!mcc-pp!krall  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 4 Jun 86 17:32:40 EDT Received: from MIT-LIVE-OAK.ARPA by SU-AI.ARPA with TCP; 4 Jun 86 14:15:55 PDT Received: from GOLD-HILL-ACORN.DialNet.Symbolics.COM by MIT-LIVE-OAK.ARPA via DIAL with SMTP id 2364; 4 Jun 86 17:15:49-EDT Received: from BOSTON.Gold-Hill.DialNet.Symbolics.COM by ACORN.Gold-Hill.DialNet.Symbolics.COM via CHAOS with CHAOS-MAIL id 22384; Wed 4-Jun-86 17:17:38-EDT Date: Wed, 4 Jun 86 17:14 EST Sender: mike@a To: LOOSEMORE@UTAH-20.ARPA From: mike%acorn@oak.lcs.mit.edu Subject: a standardization proposal Cc: common-lisp@SU-AI.ARPA Date: Wed 4 Jun 86 11:20:28-MDT From: SANDRA ..... As a solution to this, I would like to see things in Common Lisp divided into two distinct categories: (1) things that *every* implementation *must* provide to call itself "Common Lisp"; and (2) things that an implementation need not provide, but for which a standardized interface is desirable. Moreover, I would like to see things in category (2) given standardized names which can be present in *features*, so that you can readily tell whether or not the implementation supports that feature. ..... I believe that there was a similar proposal to break the language up into a "core" plus various modules around at the time of the Swiss Cheese edition of the manual, but it was removed for lack of interest. Are people still of the opinion that this is a useless idea, or is there more motivation for it now that we have a bit more experience with the language? I think this is a good idea. -Sandra Loosemore ------- I have heard that in Europe there is sentiment to have a "core" common lisp and several rings round it to add functionality. 
This is in reaction to all the manufacturers having their own dialect of "foobar-Common-Lisp", none of which can really claim to be Common Lisp, since there is no real agreement other than a vague similarity to something described in CLtL. ...mike beckerle Gold Hill Computers  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 4 Jun 86 15:23:34 EDT Received: from UTAH-20.ARPA by SU-AI.ARPA with TCP; 4 Jun 86 12:10:23 PDT Date: Wed 4 Jun 86 11:20:28-MDT From: SANDRA Subject: a standardization proposal To: common-lisp@SU-AI.ARPA Message-ID: <12212167652.9.LOOSEMORE@UTAH-20.ARPA> It seems like most of the ongoing problems with Common Lisp derive from places in the manual where there has been some effort made to standardize inherently non-portable features. The net result is that many implementations are forced to provide support for functions that are essentially useless, but the standard is still too weak to enforce that programs that attempt to use these features be portable. The business about character bits and font is a perfect example: any program that uses them is potentially non-portable anyway, so it's rather silly to insist that every implementation provide functions for dealing with them. As a solution to this, I would like to see things in Common Lisp divided into two distinct categories: (1) things that *every* implementation *must* provide to call itself "Common Lisp"; and (2) things that an implementation need not provide, but for which a standardized interface is desirable. Moreover, I would like to see things in category (2) given standardized names which can be present in *features*, so that you can readily tell whether or not the implementation supports that feature. So, if somebody has a proposal for handling generalized input events (or Kanji characters or screen manipulation functions or whatever) that is generally acceptable to others interested in implementing such a feature, it could be given a standard name and documented in the manual as being optional. 
Implementors that aren't interested in it, or that can't support it given the constraints of the operating system environment or whatever, are under no obligation to provide it. Programs that depend on the feature can test for its presence before using it. This approach would also be useful for defining various levels of CL subsets. For example, does an implementation designed for educational use on microcomputers really need to support complex numbers and four kinds of floats? Removing complex numbers from the language core and making it an optional feature would at least give us a standardized way of talking about what things a given implementation does and doesn't support. I believe that there was a similar proposal to break the language up into a "core" plus various modules around at the time of the Swiss Cheese edition of the manual, but it was removed for lack of interest. Are people still of the opinion that this is a useless idea, or is there more motivation for it now that we have a bit more experience with the language? -Sandra Loosemore -------  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 4 Jun 86 10:28:37 EDT Received: from BBNG.ARPA by SU-AI.ARPA with TCP; 4 Jun 86 07:20:02 PDT Date: 4 Jun 1986 10:19-EDT Sender: NGALL@G.BBN.COM Subject: Re: Are isomorphic structures EQUAL? From: NGALL@G.BBN.COM To: Moon@SCRC-STONY-BROOK.ARPA Cc: common-lisp@SU-AI.ARPA Message-ID: <[G.BBN.COM] 4-Jun-86 10:19:08.NGALL> In-Reply-To: <860528183719.5.MOON@EUPHRATES.SCRC.Symbolics.COM> From: David A. Moon Subject: Are isomorphic structures EQUAL? .... There will be no way to do a component-wise test (using EQUAL on each component) on two structures unless one writes a structure-specific equality predicate. Therefore, I propose that two structures of the same type that are created by DEFSTRUCT (regardless of the :TYPE option) be tested for equality component-wise and that the CLtL make this clear. 
Until then, she can get by with EQUALP (since her structures don't contain strings, floats, etc.). I don't understand this. I can't find anything in the manual that says that EQUALP behaves differently from EQUAL for structures. Actually I can't find anything in the manual that says anything at all about the behavior of either EQUAL or EQUALP on structures. It may be that by coincidence the two implementations you mentioned in your message both compare components of structures (with the default defstruct options, especially no :TYPE) in EQUALP, and do not compare components of structures in EQUAL, but I don't think the Common Lisp manual says that all implementations have to do that. The definition of EQUAL on pg. 80 says, "Certain objects that have components are EQUAL if they are of the same type and corresponding components are EQUAL." Following this line is an enumeration of these "certain objects". The definition of EQUALP on the next page has an identical line (substituting EQUALP for EQUAL) EXCEPT it omits the word "certain". I (and apparently the two implementations that I mentioned) interpret this to mean that EQUALP does behave differently from EQUAL, in that EQUALP performs a component-wise equality check on ALL objects that are defined to have components in CLtL. It just so happens that objects defined by defstruct are the only objects not enumerated in the definition of EQUAL that would be covered by EQUALP. I agree that CLtL isn't explicit about the behavior of EQUAL and EQUALP on structures, but I think the behavior of EQUAL and EQUALP in the two implementations is due to their interpreting the relevant definitions as I did above, not to coincidence. I am curious how other implementations define EQUAL and EQUALP's behavior on structures. I just tried it on KCL and they behave identically: both do a component-wise test (this is the behavior I personally prefer). 
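The ambiguity can be seen concretely with a fresh structure type; what the last two forms return is precisely what CLtL leaves open (POINT is a made-up type for illustration):

```lisp
;; A structure type with two components.
(defstruct point x y)

(let ((a (make-point :x 1 :y 2))
      (b (make-point :x 1 :y 2)))
  ;; EQL is clearly NIL: A and B are distinct objects.
  (list (eql a b)       ; => NIL
        ;; EQUAL on structures: unspecified by CLtL --
        ;; NIL in some implementations, T in others (e.g. KCL).
        (equal a b)
        ;; EQUALP under the component-wise reading above: T.
        (equalp a b)))
```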
I'd say that this is a fairly major ambiguity in CLtL and should be clarified as soon as possible (even if just informally). -- Nick  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 4 Jun 86 10:20:05 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 4 Jun 86 07:11:46 PDT Received: ID ; Wed 4 Jun 86 10:11:31-EDT Date: Wed, 4 Jun 1986 10:11 EDT Message-ID: Sender: FAHLMAN@C.CS.CMU.EDU From: "Scott E. Fahlman" To: common-lisp@SU-AI.ARPA Subject: Guidelines for the Standard In-reply-to: Msg of 4 Jun 1986 02:48-EDT from Christopher Fry To: Christopher Fry Well, we all realize that there are a lot of redundant forms here, and that we could get away with any of several subsets. ELT would not be the one I would choose to keep, if I had to choose, but that's not really the point. The point is that people wanted these kept for compatibility with older Lisps (and with older Lisp programmers). I don't think that many people maintaining large bodies of code would share your enthusiasm for a radical simplification at this point. I wouldn't mind seeing some style guidelines that point to certain of these forms as the ones to use in new code, so that in a few years we can get rid of the ones not in favor, but I don't think that such guidelines ought to be part of a specification document. Maybe we need a companion document of style guidelines, though I suspect that getting people to agree on these would be pretty bloody. 
-- Scott  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 4 Jun 86 03:04:28 EDT Received: from REAGAN.AI.MIT.EDU by SU-AI.ARPA with TCP; 3 Jun 86 23:56:49 PDT Received: from KAREN.AI.MIT.EDU by REAGAN.AI.MIT.EDU via CHAOS with CHAOS-MAIL id 34661; Wed 4-Jun-86 02:55:32-EDT Date: Wed, 4 Jun 86 02:56 EDT From: Christopher Fry Subject: long-char, kanji To: Fahlman@C.CS.CMU.EDU, evan@SU-CSLI.ARPA cc: common-lisp@SU-AI.ARPA In-Reply-To: Message-ID: <860604025644.3.CFRY@KAREN.AI.MIT.EDU> Anyway, the question was not whether you ever want to use the notion of "fonts" and "bits" in some generic sense; the question was whether you currently use the Char-Font and Char-Bits facility as currently defined in the language. Coral has not yet used char-font and char-bits. If people haven't made a lot of use of these in their current form, we are free to think about whether there is some better way of providing the comparable functionality, and how much of the better way ought to be in the standard part of the language. I'd like to see a convention for input events that could easily accommodate a mouse. My maximal mouse has 15 keys and 6 degrees of motion, with 1 mouse for each hand, so three buttons plus X and Y won't do.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 4 Jun 86 02:59:37 EDT Received: from REAGAN.AI.MIT.EDU by SU-AI.ARPA with TCP; 3 Jun 86 23:48:35 PDT Received: from KAREN.AI.MIT.EDU by REAGAN.AI.MIT.EDU via CHAOS with CHAOS-MAIL id 34660; Wed 4-Jun-86 02:47:11-EDT Date: Wed, 4 Jun 86 02:48 EDT From: Christopher Fry Subject: Guidelines for the Standard To: Fahlman@C.CS.CMU.EDU, common-lisp@SU-AI.ARPA In-Reply-To: Message-ID: <860604024822.2.CFRY@KAREN.AI.MIT.EDU> One could also argue (weakly) No, probably strongly! that this language is doomed to a certain amount of inconsistency in naming and argument order. Any set of rules you establish for this is going to break down in certain places. Yeah but we could be a lot more consistent than we are now. 
I meant only to eliminate unnecessary inconsistency such as nth takes its index as the first arg and elt takes its index as the 2nd arg. [Below I propose flushing NTH which solves this particular inconsistency.] (-: By the way, how else would you spell RPLACA? We kept this around mostly as a historical monument, and any tampering would amount to desecration. Here in Boston we have a place called the Computer Museum. I'm sure we could find an honorable resting place for RPLACA there. It does an ugly thing, so it should be ugly. Well, you've got a point. Frequently new cars are uglier than old ones so replacing your old car can be ugly. Only wimps use Setf of Car. But we artists are allowed to differ in matters of aesthetic judgement. :-) Touche! I forgot that ART can be used to argue both sides of any dispute. ...... I'd flush both rplaca AND car. (setf (elt foo) new-value) where the 2nd arg to elt defaults to 0. And, in case you're wondering about rplacd... (setf (rest foo) new-value) I'd extend rest to take an optional 2nd arg of index so that we could flush cdr AND nthcdr. This new arg to rest defaults to 1. [A value of 0 just returns the first arg] I'd also flush the rest of the car and cdr derived fns, as well as first -> tenth and nth [use elt instead]. The above proposals eliminate 42 functions from CL, whose functionality is pretty easily accommodated by the addition of 1 optional arg to each of 2 existing functions. [Readers of the Hitchhiker's Guide to the Galaxy can now infer the question to which the answer is 42.] If you tell me that CADAR functionality is essential to CL, then I'd propose that elt be able to take a list of integers which specifies a "list-path" to get to any arbitrary element or cons of a [nested] list. With this notation, we could flush REST too, [my list-path notation allows the specification of tails of lists as well] but I'm not sure I'm willing to go to THAT extreme. [Though it would bring the total up to 43]. 
Elt vs nth, as far as efficiency is concerned, can be fixed with a declaration, when you really care. A smart compiler can turn (elt foo 1) into the same code as (cadr foo). Notice there is exactly 1 character difference in length for those of you concerned about typing. Getting just a little fancier, a negative index to elt or rest could index from the END of the list instead of the beginning. Thus (rest foo -1) is equivalent to (last foo) . [I'm up to 44] These negative indices have the disadvantage of missing a common error check, like for instance, when you're in a loop decrementing the index and using that index in a call to NTH, you'd like something to tell you if your index goes illegally below 0. This could be fixed by having an optional 3rd arg PERMIT-NEGATIVE-INDEX? which defaults to t, and is primarily used by the compiler. But you still need to check ranges for too positive or too negative anyway so maybe this point is moot. ...... As was pointed out during the subset discussion recently [I think by RAM], eliminating these forty-odd functions would not decrease the size of an implementation of CL by 44/620. It would make it a little smaller, but it would make it easier for people to learn to code in CL and, even more dramatically, make it easier to read CL. There are other redundancies and inconsistencies in CL of negative value. The above is just an example of the kinds of mods that could be made. Do I hear 45? ps: I've already implemented all the stuff I've proposed cutting so this isn't saving me any work.  
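For concreteness, the proposed argument conventions can be sketched as ordinary functions (GENERALIZED-ELT and GENERALIZED-REST are stand-in names, since the proposal would change ELT and REST themselves):

```lisp
;; Sketch of the proposal: ELT with a defaulted index and REST with
;; an optional count, subsuming CAR/NTH and CDR/NTHCDR.
(defun generalized-elt (sequence &optional (index 0))
  (elt sequence index))

(defun generalized-rest (list &optional (n 1))
  ;; N = 0 returns the list itself; N = 1 is ordinary REST.
  (nthcdr n list))

;; (generalized-elt '(a b c))     => A      ; replaces CAR/FIRST
;; (generalized-elt '(a b c) 2)   => C      ; replaces NTH/THIRD
;; (generalized-rest '(a b c))    => (B C)  ; replaces CDR
;; (generalized-rest '(a b c) 2)  => (C)    ; replaces NTHCDR
```

The negative-index and list-path extensions would go beyond what NTHCDR can express directly and are omitted here.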
Received: from SU-AI.ARPA by AI.AI.MIT.EDU 3 Jun 86 19:10:51 EDT Received: from SU-GLACIER.ARPA by SU-AI.ARPA with TCP; 3 Jun 86 16:01:46 PDT Received: by su-glacier.arpa with Sendmail; Tue, 3 Jun 86 16:02:19 pdt Received: from pachyderm.UUCP (pachyderm.ARPA) by mips.UUCP (4.12/4.7) id AA23650; Tue, 3 Jun 86 15:04:22 pdt Received: by pachyderm.UUCP (4.12/4.7) id AA06981; Tue, 3 Jun 86 15:02:11 pdt Date: Tue, 3 Jun 86 15:02:11 pdt From: mips!pachyderm.earl@su-glacier.arpa (Earl Killian) Message-Id: <8606032202.AA06981@pachyderm.UUCP> To: glacier!RAM@C.CS.CMU.EDU Cc: Common-Lisp@SU-AI.ARPA In-Reply-To: Rob MacLachlan's message of Tue, 3 Jun 1986 03:02 EDT Subject: tail recursion optimization Date: Tue, 3 Jun 1986 03:02 EDT From: Rob MacLachlan If this were a beauty contest then you would surely win, since Common Lisp doesn't currently have any way to talk about this sort of compile-time binding of global names. I think that this deficiency should be remedied by adding a notion of block compilation. Common Lisp certainly does have a way to talk about this sort of thing. My original question to the Common-Lisp mailing list (2 years ago?), was something like "Is (DEFUN foo (a...) body) equivalent to (SETF (SYMBOL-FUNCTION 'foo) #'(LAMBDA (a...) body)) or (SETF (SYMBOL-FUNCTION 'foo) (LABELS ((foo (a...) body)) #'foo)) ?" LABELS also accommodates block compilation easily. The only question is whether it is used by DEFUN or not. Various people objected to this. The only argument I can remember after this long a time (there are archives of this stuff of course) is that TRACE wouldn't work on functions that branched back because TRACE works by using the SYMBOL-FUNCTION cell. Of course, TRACE could be fixed instead of breaking DEFUN, but that's not how things were left... P.S. I think DEFUN ought to be defined in the manual by LISP code such as the above to avoid ambiguity. 
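The observable difference between the two candidate definitions is how recursive calls resolve; a sketch (COUNT-DOWN-A and COUNT-DOWN-B are made-up names, not anyone's actual DEFUN expansion):

```lisp
;; (a) Recursive calls go through the SYMBOL-FUNCTION cell, so
;;     TRACE (which rebinds the cell) sees every recursive call:
(setf (symbol-function 'count-down-a)
      #'(lambda (n)
          (if (zerop n) 'done (count-down-a (1- n)))))

;; (b) Recursive calls are resolved lexically by LABELS, so the
;;     compiler may branch back directly (block compilation);
;;     TRACE sees only the outermost call:
(setf (symbol-function 'count-down-b)
      (labels ((count-down-b (n)
                 (if (zerop n) 'done (count-down-b (1- n)))))
        #'count-down-b))
```

Both return DONE for any non-negative integer; the difference appears only when the function cell is rebound after the fact.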
Implementations will of course add cruft to remember old definitions and issue warnings and stuff, but the basic operation should be well-defined in terms of simpler LISP code.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 3 Jun 86 18:47:47 EDT Received: from MIT-LIVE-OAK.ARPA by SU-AI.ARPA with TCP; 3 Jun 86 15:38:12 PDT Received: from GOLD-HILL-ACORN.DialNet.Symbolics.COM by MIT-LIVE-OAK.ARPA via DIAL with SMTP id 2309; 3 Jun 86 18:38:15-EDT Received: from BOSTON.Gold-Hill.DialNet.Symbolics.COM by ACORN.Gold-Hill.DialNet.Symbolics.COM via CHAOS with CHAOS-MAIL id 22071; Tue 3-Jun-86 16:59:41-EDT Date: Tue, 3 Jun 86 16:56 EST From: mike@a To: JAR%MX.LCS.MIT.EDU@MC.LCS.MIT.EDU Subject: type disjointness Cc: Moon@SCRC-STONY-BROOK.ARPA, common-lisp@SU-AI.ARPA Date: Tue, 3 Jun 86 14:14:25 EDT From: Jonathan A Rees Date: Thu, 29 May 86 18:21 EDT From: David A. Moon From: JAR I would like to change the language so that the type of structures (whose DEFSTRUCT doesn't use :TYPE) is disjoint from other types. Your arguments for this are good, but on the other hand this change would be adding substantial new constraints on implementations, wouldn't it? Probably the reason why CLtL is so coy about exactly how structures are implemented is to maximize implementation freedom for some reason. Yes, that's the reason, but I'd be surprised if this freedom was really essential. Implementing my suggestion would be easy in some implementations and not so easy in others. Certainly a correct way to implement this in any system would be to change VECTORP or whatever predicate is true of structures so that it checks for structureness before returning true. But this might be considered too costly. However, CL already has so many types that it would surprise me if a system existed in which adding one more was really difficult. I'd like to hear from implementors who would have a really hard time with this -- I would probably learn something. 
Jonathan  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 3 Jun 86 18:43:57 EDT Received: from XEROX.COM by SU-AI.ARPA with TCP; 3 Jun 86 15:27:58 PDT Received: from Cabernet.ms by ArpaGateway.ms ; 03 JUN 86 15:26:29 PDT Date: 3 Jun 86 15:22 PDT From: nuyens.pa@Xerox.COM Subject: re: long-char, kanji To: common-lisp@su-ai.ARPA Reply-to: nuyens.pa@Xerox.COM Message-ID: <860603-152629-1971@Xerox> This is just a note to expand on Fischer's info about the Xerox corporate character code standard, one of the protocols that comprise the Printing and Network Systems. The character code standard (usually just called NS characters) specifies character codes to represent text fragments. This requires a mapping of character codes to graphic, rendering and control characters together with an interchange standard describing the legal encoding of strings of these codes. The NS encoding is a 16-bit encoding with escapes for multibyte characters. Single byte characters are essentially in the familiar ISO/ANSI encoding. There are currently approximately 7100 character codes assigned. NS is only specified as an interchange standard. In Xerox Lisp, we use NS characters without escapes as the internal representation. (I would discourage including escapes in internal representations, since constant time random access is reduced to linear scans. This is a separate decision for external representation.) Rendering codes reside in a separate region of the character code space. Unlike graphic character codes (e.g. STANDARD-CHAR) which determine the information included in a text fragment, rendering codes are only used to specify appearance. e.g. To avoid the question of whether (and how) a text search stops at an ffl ligature when searching for "ff", rendering characters are only included in rendered images of a document (to send to a printer, for instance). 
To give specific info about how our implementation of NS characters addresses some of the problems mentioned recently on the list: representation: Strings are represented as homogeneous simple vectors of thin (8 bit) or fat (16 bit) characters. Ignoring storage taken to represent them, the difference between fat characters and thin characters is transparent to the user. In particular, since we allow fat characters in symbol print names, we use an equivalent of Ida's string-normalize function to guarantee unique representation for hashing. kanji: NS includes all "JIS C 6226" graphic characters including the 6300 most common Japanese kanji. There are also Hiragana and Katakana character codes specified. (While there is substantial overlap with the Japanese kanji, Chinese characters are semantically separate and their character code assignments have not yet been published.) type hierarchy: Since we have char-bits-limit = char-font-limit = 1, STANDARD-CHAR is the same as STRING-CHAR. I agree with Moon that STRING should be (VECTOR CHARACTER) and provide specialisations (even though this is a change from the status quo). In our applications, we do as Fahlman suggests and use external data-structures to represent the sort of information encoded in "styles". (It is hard to standardize which attributes should be made part of style (some people claim "case" should be a style bit!)). number of character codes required: At first glance it seems hard to imagine exceeding 16 bits. Note however that the 7200 characters in NS don't include Chinese, Korean, Farsi, Hindi, etc. How many times have you been *sure* that the FOO field wouldn't be required to be larger than 16 bits? A more detailed description of the NS character encoding is available in XSIS 058404 available from: Xerox Systems Institute 2100 Geng Rd. Palo Alto, CA 94303 attn: Pam Cance (tell her you were referred by XAIS and your odds of having any fee waived are good.) 
phone: 415-496-6511 Greg Nuyens Text, Graphics and Printing, Xerox AI Systems  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 3 Jun 86 14:24:52 EDT Received: from MC.LCS.MIT.EDU by SU-AI.ARPA with TCP; 3 Jun 86 11:14:31 PDT Received: from MX.LCS.MIT.EDU by MC.LCS.MIT.EDU via Chaosnet; 3 JUN 86 14:14:15 EDT Date: Tue, 3 Jun 86 14:14:25 EDT From: Jonathan A Rees Subject: type disjointness To: Moon@SCRC-STONY-BROOK.ARPA cc: common-lisp@SU-AI.ARPA In-reply-to: Msg of Thu 29 May 86 18:21 EDT from David A. Moon Message-ID: <[MX.LCS.MIT.EDU].924104.860603.JAR> Date: Thu, 29 May 86 18:21 EDT From: David A. Moon From: JAR I would like to change the language so that the type of structures (whose DEFSTRUCT doesn't use :TYPE) is disjoint from other types. Your arguments for this are good, but on the other hand this change would be adding substantial new constraints on implementations, wouldn't it? Probably the reason why CLtL is so coy about exactly how structures are implemented is to maximize implementation freedom. Yes, that's the reason, but I'd be surprised if this freedom was really essential. Implementing my suggestion would be easy in some implementations and not so easy in others. Certainly a correct way to implement this in any system would be to change VECTORP or whatever predicate is true of structures so that it checks for structureness before returning true. But this might be considered too costly. However, CL already has so many types that it would surprise me if a system existed in which adding one more was really difficult. I'd like to hear from implementors who would have a really hard time with this -- I would probably learn something.
Jonathan  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 3 Jun 86 14:12:13 EDT Received: from IMSSS by SU-AI with PUP; 03-Jun-86 10:59 PDT Date: 3 Jun 1986 1057-PDT From: Rem@IMSSS Subject: Bits for input, fonts for output To: COMMON-LISP@SU-AI cc: Fahlman%C.CS.CMU.EDU@SCORE I agree with Fahlman's message, that neither keystrokes nor fonts & presentation is what we normally want internally in strings. I'd like to go further. For the most part, what you want in strings is similar to what you want in PNAMEs. Do you really want the name of some function to be [First character plain vanilla "F", second character plain vanilla "O", third character HYPER-META-"O"?] so that you can't access that function without somehow getting HYPER-META-"O" actually into LISP rather than escaping to some user-interface side-effect such as inserting ten blank lines in the current window? Do you really want the name of some function to be "FOO" in the Bodoni font, so that you have to type it in that font or you can't access that function? True there are some plain vanilla characters you hardly ever want in the PNAME of an identifier/symbol, but in general you should be able to do INTERN or STRING2ID or whatever on an arbitrary string of whatever you consider legal characters. Any information that isn't reasonable for coercing into a PNAME shouldn't be considered a normal part of strings either. Use extended strings (fat strings or whatever) for all that other cruft. Re the Japanese person who wants to include kanji etc. in PNAMEs of symbols (names of functions and variables mostly), I agree. Get rid of fonts and bit attributes in PNAMEs, but allow an implementation to have fat characters in vanilla strings and PNAMEs. -- One problem, how do you exchange programs between ASCII-only implementations and KANJI implementations? I don't know the answer. It may be that we can't adapt CL to handle kanji in the right way (portably) without adding gross overhead to the ASCII-only implementations.
It may simply not be reasonable to extend CL to handle kanji in a portable way, thus we may relegate kanji to an extension rather than to portable CLtL. Anybody have any workable solutions?? Or should we punt on kanji this year? -------  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 3 Jun 86 13:55:35 EDT Received: from IMSSS by SU-AI with PUP; 03-Jun-86 10:45 PDT Date: 3 Jun 1986 1043-PDT From: Rem@IMSSS Subject: Flush bits from standard To: COMMON-LISP@SU-AI As somebody said, the semantics of bit attributes on characters is dependent on the implementation, therefore no portable program can be written to effectively use them. (Probably same is true of font attributes.) In general, when a feature can't effectively be used by portable programs, it shouldn't be considered part of the portable standard. The only exception would be something like OPEN which is so absolutely essential to all implementations that it's inconceivable an implementation wouldn't provide it, and many parts of its description CAN be standardized. But in the main, something that can't be effectively used by portable programs shouldn't be in CLtL. Somebody complained about the need to store arbitrary keystrokes in characters. I don't see how this is an argument for portable bit attributes. The keyboards on each machine are different, and thus different numbers of bit attributes will be needed on each machine, and their semantics will be different too. (What one machine calls META will be called EDIT on another, one machine will have a mouse-interrupt bit while another won't. How can a program be written to insist that HYPER-META-Q is the command for quitting a failing grammar production in an interactive natural-language-parser program, if my machine never heard of HYPER and doesn't have enough bits to represent all the different commands even by juggling them around? VM/CMS doesn't even have CONTROL or META bits, but has more than 128 basic keys on the keyboard (treating "a" and shift-"a" i.e. 
"A" as different, treating and alt- i.e. as different, treating and shift- i.e. as different).) I vote for stripping bit&font attributes out of the language, but would accept carefully making them as vague as possible (just like namestrings and pathnames for OPEN) and documenting just the hook in the manual, saying the details are implementation dependent, so nobody will try to write portable code using the attributes directly, but rather will put the attribute-hacking code in a separate system-dependent file/module. The manual should of course everywhere distinguish between where it exactly specifies the semantics and where it merely specifies the syntax and general idea of semantics but leaves the detailed semantics up to the implementation, with the latter kept to the minimum. -------  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 3 Jun 86 11:39:34 EDT Received: from KIM.Berkeley.EDU by SU-AI.ARPA with TCP; 3 Jun 86 08:30:18 PDT Received: by kim.Berkeley.EDU (5.51/1.12) id AA15409; Tue, 3 Jun 86 08:30:08 PDT Received: from fimass by franz (5.5/3.14) id AA02763; Tue, 3 Jun 86 07:39:43 PDT Received: by fimass (5.5/3.14) id AA01279; Tue, 3 Jun 86 06:27:57 PST From: franz!fimass!jkf@kim.Berkeley.EDU (John Foderaro) Return-Path: Message-Id: <8606031427.AA01279@fimass> To: common-lisp@su-ai.arpa Subject: Re: tail recursion optimization In-Reply-To: Your message of Tue, 03 Jun 86 03:02:00 EDT. Date: Tue, 03 Jun 86 06:27:53 PST To answer earl's question: yes, a self-tail-recursive call is made with a jump to the beginning of the function code. I agree with Rob, this is an important optimization and 99% of the time is it precisely what you want. I'd suggest that the Common Lisp Standard state that within the defun of foo, there is an impliclit (declare (inline foo)) and if the user plans on redefining foo while executing foo, it is his burden to (declare (notinline foo)). -john foderaro, franz inc.  
Received: from SU-AI.ARPA by AI.AI.MIT.EDU 3 Jun 86 11:26:10 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 3 Jun 86 08:14:51 PDT Received: ID ; Tue 3 Jun 86 11:14:32-EDT Date: Tue, 3 Jun 1986 11:14 EDT Message-ID: Sender: FAHLMAN@C.CS.CMU.EDU From: "Scott E. Fahlman" To: Evan Kirshenbaum Cc: common-lisp@SU-AI.ARPA Subject: long-char, kanji Sorry, that last message got away. Someday someone will write a decent mail program that is tolerant of sloppy typists... I haven't used fonts, but only because I wasn't on a machine that could do them. I've since moved to an Explorer, and I'm sure that now that I have them, I'll use them. Also, now that I *have* a computer with Control, Meta, Super and Hyper keys, I'd be very upset if I couldn't input any combination I can type as a character. My vote is for leaving at least bits (and probably font) in the language. As long as string-char is there when you need efficiency, I don't see what's wrong with having full chars as well. It is clear that each implementation will want to provide Lisp programs with access to any keyboard event the user is capable of generating (and that the operating system doesn't censor). The point is not that we want to take these away from you, but that such "input events" are deeply unportable and that they don't have much in common with the kinds of characters you want to use internally, pack into strings, etc. Not only are there various bit fields to worry about, but in an input event you might want to distinguish between "3" from the normal keyboard and "3" from the numeric pad, you might want a timestamp as part of this input, and so on. The suggestion is that hanging a "bits" field off each character is a half-baked solution that doesn't solve the full problem for input events, and that therefore doesn't get used in the ways we originally anticipated. What it does do is to needlessly complicate the representation of internal characters.
The argument about "fonts" is parallel, but this time the issue is what you send to the display to get various effects. A font field doesn't really do the job, and it generally makes no sense to store this in strings. I'm still pondering whether I believe that the "style" is a legitimate part of an internal text string or is something you do on output. Anyway, the question was not whether you ever want to use the notion of "fonts" and "bits" in some generic sense; the question was whether you currently use the Char-Font and Char-Bits facility as currently defined in the language. If people haven't made a lot of use of these in their current form, we are free to think about whether there is some better way of providing the comparable functionality, and how much of the better way ought to be in the standard part of the language. If people have made a lot of use of this facility, we're probably stuck with the status quo, though all the implementors can conspire to make these fields of zero length. -- Scott  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 3 Jun 86 11:10:38 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 3 Jun 86 08:00:48 PDT Received: ID ; Tue 3 Jun 86 11:00:39-EDT Date: Tue, 3 Jun 1986 11:00 EDT Message-ID: Sender: FAHLMAN@C.CS.CMU.EDU From: "Scott E. Fahlman" To: Evan Kirshenbaum Cc: common-lisp@SU-AI.ARPA Subject: long-char, kanji In-reply-to: Msg of 2 Jun 1986 12:53-EDT from Evan Kirshenbaum I haven't used fonts, but only because I wasn't on a machine that could do them. I've since moved to an Explorer, and I'm sure that now that I have them, I'll use them. Also, now that I *have* a computer with Control, Meta, Super and Hyper keys, I'd be very upset if I couldn't input any combination I can type as a character. My vote is for leaving at least bits (and probably font) in the language. As long as string-char is there when you need efficiency, I don't see what's wrong with having full chars as well. 
It is clear that each hardware/OS combination will want to provide Lisp programs with getting at any keyboard event the user is capable of generating  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 3 Jun 86 05:04:17 EDT Received: from SU-SHASTA.ARPA by SU-AI.ARPA with TCP; 3 Jun 86 01:51:40 PDT Received: by su-shasta.arpa; Tue, 3 Jun 86 01:49:58 PDT Received: by nttlab.ntt.junet (4.12/4.7JC-7) CHAOS with CHAOS-MAIL id AA08743; Tue, 3 Jun 86 12:43:45 jst From: yuasa@kurims.kurims.kyoto-u.junet Received: by nttlab.ntt.junet (4.12/4.7JC-7) CHAOS with CHAOS-MAIL id AA08614; Tue, 3 Jun 86 12:39:19 jst Received: by kurims.kyoto-u.junet (2.0/4.7) id AA00587; Mon, 2 Jun 86 12:41:57+0900 Date: Mon, 2 Jun 86 12:41:57+0900 Message-Id: <8606020341.AA00587@kurims.kyoto-u.junet> To: Common-lisp@su-ai.arpa Date: Fri, 30 May 1986 03:29 EDT From: Rob MacLachlan To: miller.pa@XEROX.COM Cc: Common-lisp@SU-AI.ARPA Subject: Some questions In-Reply-To: Msg of 30 May 1986 00:21-EDT from miller.pa at Xerox.COM Status: R Date: Friday, 30 May 1986 00:21-EDT From: miller.pa at Xerox.COM To: Common-lisp at SU-AI.ARPA Re: Some questions Now that all you people and companies out there are doing all these common-lisp compilers, I'd like to ask you some questions: Do you transform the source to continuation passing style before compiling? How happy are you with this decision? If you don't, do you do tail-recursion optimizations anyway? If you do, do you do multiple values by calling the continuation with multiple arguments? ...... Many of the Common Lisps floating around are based on Spice Lisp and this compiler. Some exceptions are Symbolics, Lucid and KCL. The current KCL compiler is a two-pass compiler which generates portable C code. (I do not want to repeat the discussions on why C code and how it is possible. Please refer to the KCL Report for this issue.) The main roles of the first pass are: 1. 
to discriminate those lexical objects (variables, functions, blocks, and tags) that are to be closed up in a closure from those not. 2. to collect optimization information such as type information and reference information. Most of the compile-time error checking is done during the first pass. The first pass generates intermediate tree code that is similar to the original Lisp code but is scattered with the collected information all around. The second pass is the main pass and is responsible for the rest of the compilation job. Tail-recursive optimization is done during the second pass. Many other optimization hackings take place also during the second pass. KCL is not a descendant of any other Lisp. The only reference material we had was the draft of CLtL during the development of the first running version of KCL (October 1983 to March 1984). (CLtL itself was not yet published at that time.) We wrote every line of the KCL code by ourselves. (There is one exception: we later obtained the code of RATIONALIZE from the Spice project and used that code without changes.) Thus KCL was oriented for Common Lisp from the very beginning. In particular, the compiler supports many Common Lisp features: The fact that the KCL compiler is a two-pass compiler already indicates this. (We could make it a one-pass compiler if the lexical closure were not supported by Common Lisp.) In addition, the compiler takes care of keyword parameters for compile-time dispatching of such functions as MEMBER and ASSOC. It also tries to avoid creation of lexical closures in such situations as in MAP functions, FUNCALL, and APPLY. It tries to avoid making a list for &REST parameters in some situations, etc, etc, etc.... But it is true that there still remains big room for improvement. 
-- Taiichi  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 3 Jun 86 05:02:51 EDT Received: from SU-SHASTA.ARPA by SU-AI.ARPA with TCP; 3 Jun 86 01:51:49 PDT Received: by su-shasta.arpa; Tue, 3 Jun 86 01:50:08 PDT Received: by nttlab.ntt.junet (4.12/4.7JC-7) CHAOS with CHAOS-MAIL id AA08754; Tue, 3 Jun 86 12:44:10 jst From: hagiya@kurims.kurims.kyoto-u.junet Received: by nttlab.ntt.junet (4.12/4.7JC-7) CHAOS with CHAOS-MAIL id AA08724; Tue, 3 Jun 86 12:42:41 jst Received: by kurims.kyoto-u.junet (2.0/4.7) id AA00200; Tue, 3 Jun 86 11:49:19+0900 Date: Tue, 3 Jun 86 11:49:19+0900 Message-Id: <8606030249.AA00200@kurims.kyoto-u.junet> To: Common-lisp@su-ai.arpa Subject: long-char, kanji I think that all the discussions on extending character code are made with the intent of defining the `international' version of Common Lisp. The discussions, therefore, seem to keep the pure version of Common Lisp intact and add some alien features to it by introducing such (ugly) names as "FAT-...", "PLUMP-...", etc.; as a result of the extension, the complex type hierarchy of Common Lisp becomes more complex. Complexity is sometimes unavoidable. However, I hope the simplest solution will also be permitted, as Moon argues: Another solution that should be permitted by the language is to have only one representation for strings, which is fat enough to accommodate all characters. In some environments the frequency of thin strings might be low enough that the storage savings would not justify the extra complexity of optimizing strings that contain only STRING-CHARs. Common Lisp (or Lisp) is a very flexible language, and in an extreme case, we can replace all the special forms, macros and functions of Common Lisp with their Japanese (or Chinese or any language) counterparts by preparing appropriate macro packages. In that case, the doc-strings will also be written in Japanese. In such a completely Japanese environment, it's silly to treat 8-bit characters in a special way.
I think that the discussions should take into account how the extended code will be used. We can consider several situations. 1. Programs written in one language (other than English) manipulate data (characters or strings) in that language. 2. Programs written in English manipulate data in one language (other than English). 3. Programs written in English manipulate data in more than one language at the same time. I myself like to program in Japanese even if the program does not manipulate Japanese characters or strings. It's easier to devise function names in one's mother tongue than in a foreign language. By the way, no one seems to stress the importance of extending symbol names. If an extension, for example, allows simple strings to hold only 8-bit characters and requires symbol names to be simple strings, then I completely disagree with that. Masami Hagiya  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 3 Jun 86 03:10:17 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 3 Jun 86 00:02:44 PDT Received: ID ; Tue 3 Jun 86 03:02:19-EDT Date: Tue, 3 Jun 1986 03:02 EDT Message-ID: From: Rob MacLachlan To: Kent M Pitman Cc: Common-Lisp@SU-AI.ARPA Subject: tail recursion optimization In-reply-to: Msg of 3 Jun 1986 02:27-EDT from Kent M Pitman I am not convinced the manual is unambiguous. This is another one of the cases where your lisp culture influences your interpretation. On a lisp machine, doing a function call always means indirecting through the definition cell, so it is obvious to you that a self call will indirect through the definition cell. People who are used to using tail-recursion instead of loops will think that it is just as obvious that a tail-recursive self call turns into a branch. Spice Lisp, and probably other implementations, do this particular optimization, so the manual certainly wasn't sufficiently overt about disallowing it.
We definitely need a clarification one way or the other, and it makes a great deal of sense to me to optimize the 99.99% case instead of the perverse one. You do need a way to inhibit this, but as I already said, that can be done with NOTINLINE. If this were a beauty contest then you would surely win, since Common Lisp doesn't currently have any way to talk about this sort of compile-time binding of global names. I think that this deficiency should be remedied by adding a notion of block compilation. On conventional architectures, there are huge advantages to resolving a function reference at compile time. To me, a defun is just a degenerate block which contains only one definition and one entry point. Rob  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 3 Jun 86 02:39:17 EDT Received: from [192.10.41.41] by SU-AI.ARPA with TCP; 2 Jun 86 23:29:26 PDT Received: from RIO-DE-JANEIRO.SCRC.Symbolics.COM by ELEPHANT-BUTTE.SCRC.Symbolics.COM via CHAOS with CHAOS-MAIL id 14117; Tue 3-Jun-86 02:26:54 EDT Date: Tue, 3 Jun 86 02:27 EDT From: Kent M Pitman Subject: tail recursion optimization To: ram@cmu-cs-c cc: kmp@SCRC-STONY-BROOK.ARPA, Common-Lisp@SU-AI In-Reply-To: Message-ID: <860603022708.1.KMP@RIO-DE-JANEIRO.SCRC.Symbolics.COM> This isn't the sort of thing that is decided by a beauty contest. As Earl pointed out, the manual is clear on this point. By the way, the following is a valid CL function and I would be really irritated if it didn't work... (DEFUN FOO (X Y) (REQUIRE 'FOO-SUPPORT) ;`Autoload' real definition of FOO! (FOO X Y)) The definition of named functions on p59 seems to me to have pretty clear implications in this regard. Of course, nothing keeps you from writing a compiler which can conditionally compile code using your optimization. However, the default should be "off" for that switch when compiling CL programs since it would break portable programs. 
If someone were to turn on the switch, the code would not behave internally according to the CL model, although externally it could be called by CL code, since it would be an abstraction violation for the caller to know whether the callee was implemented using recursion or iteration. Presumably your native SPICE:COMPILE and SPICE:COMPILE-FILE functions could observe some switch variable which CL:COMPILE-FILE and LISP:COMPILE would bind to a safe value. By the way, it seems to me that a CL which doesn't allow asynchronous interrupts or multiprocessing in a shared address space should be allowed to code-walk an expression for proof that a particular function can be safely turned into a tail-recursive call. However, my feeling is that systems which allow me to interrupt them and change global functions/values would be, although perhaps technically within their rights to do this optimization, also likely to confuse CL users whose model of their program might be violated and whose "portable" experience in debugging would prove violated.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 3 Jun 86 01:40:07 EDT Received: from CSNET-RELAY.ARPA by SU-AI.ARPA with TCP; 2 Jun 86 22:31:58 PDT Received: from umass-cs by csnet-relay.csnet id bf15674; 3 Jun 86 1:21 EDT Date: Mon, 2 Jun 86 18:38 EST From: ELIOT%cs.umass.edu@CSNET-RELAY.ARPA To: Common-Lisp@SU-AI.ARPA Subject: New FORMAT Features From: Rem@imsss ELIOT's proposed clear-to-EOL is ambiguous for bitmapped displays. Do you clear a swath from the bitmap equal in height to the current font height on the current line? I don't think there is any reasonable alternative. You might want text and graphics to be treated as separate planes that don't affect each other, but CL doesn't have any graphics capacity anyhow. (What is the current font if you have just done a cursorpos without printing anything?) The current font is the font that would be used to print the next character.
Actually font changes are not really supported by CL, so this is moot. Printing a character with a non-zero font field doesn't necessarily change the font, does it? Does any existing implementation use the font bits of a character to do anything like this? Or do you clear only text that is exactly on the same line in the same font and not text from neighboring lines that slightly overlaps the current line? That would seem bizarre, since presumably you want newly drawn characters to be readable. Perhaps ELIOT wants this feature only for character-only displays or other devices that support only fixed-width fixed-height fonts in character arrays with non-overlapping character positions, or any window that is emulating a printing terminal with no graphics or cursor positioning (i.e. maybe cursorpos and erase-eol are mutually exclusive on bitmapped displays)? Cursorpos and erase-eol are not mutually exclusive. They interact in a funny way if variable sized fonts are used, but that involves using non-CL features. A variable *width* font is OK if the cursorpos operation is defined in terms of the width and height of the SPACE character. (Required to be non-zero because of this.) You lose if you change from a 12 point to a 10 point font and expect line three to be in the same place. All of these features work together if you don't change fonts, which you can't currently do in CL, and they can work together anyhow if you are careful to leave enough space around everything. I think bit mapped displays are the only way to use a picture tube (I own a Macintosh and no TV) but I am also concerned about the use of dialup lines. I think it is a real shame that no one has made ZMACS work over the chaosnet or on dialup lines. Perhaps we should add enough primitives to CL to support the implementation of a portable screen editor (scroll-region and insert-delete character.)
Actually I'd like to see somebody propose a simple set of user-interface primitives that handle all usual kinds of terminals/displays with and without multiple windows/panes. I would too. I would rather see a Flavor system and an error system and an iteration facility (loop macro). I think this simple set of features does most of what we need inexpensively. ELIOT's proposal is merely deficient, not in the right direction, probably...  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 3 Jun 86 00:27:23 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 2 Jun 86 21:20:41 PDT Received: ID ; Tue 3 Jun 86 00:19:54-EDT Date: Tue, 3 Jun 1986 00:19 EDT Message-ID: From: Rob MacLachlan To: mips!pachyderm.earl@su-glacier.arpa (Earl Killian) Cc: common-lisp@SU-AI.ARPA, glacier!franz!fimass!jkf@KIM.BERKELEY.EDU, ucbkim!Xerox.COM!miller.pa@SU-GLACIER.ARPA Subject: Some questions In-reply-to: Msg of 2 Jun 1986 21:49-EDT from mips!pachyderm.earl at su-glacier.arpa (Earl Killian) Date: Monday, 2 June 1986 21:49-EDT From: mips!pachyderm.earl at su-glacier.arpa (Earl Killian) Re: Some questions Unless, you're talking about self-recursive LABELS, note that according to a discussion on this mailing list long ago, it is not legal to implement (defun revappend (a b) (if (null a) b (revappend (cdr a) (prog1 a (setf (cdr a) b))))) with a branch back to the beginning of the code, because function calls are defined in terms of symbol-function of the symbol that happens to be the car of a form. I would be surprised if this was really the consensus. Tail-recursive self-calls are such a common idiom in many people's lisp styles that it is unreasonable to prohibit them turning into a branch. I think that a compiler should be allowed to do recursive calls however it pleases. Any optimization of this sort can be turned off by declaring the function NOTINLINE, forcing a full function call. 
Rob  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 3 Jun 86 00:24:34 EDT Received: from XEROX.COM by SU-AI.ARPA with TCP; 2 Jun 86 21:14:30 PDT Received: from Cabernet.ms by ArpaGateway.ms ; 02 JUN 86 21:14:29 PDT Date: 2 Jun 86 20:53 PDT From: fischer.pa@Xerox.COM Subject: Re: long-char, kanji In-reply-to: "Scott E. Fahlman" 's message of Mon, 2 Jun 86 11:16 EDT To: common-lisp@SU-AI.ARPA Message-ID: <860602-211429-1134@Xerox> In Xerox Common Lisp we have char-bits-limit and char-font-limit set to zero, thus we don't use this feature and would lose no sleep over its passing. In fact, there might be some small rejoicing, but that is beside the point. From discussions here I understand that the whole problem of a programmer-friendly font system can be quite thorny. It seems the issue before the CL community is to tentatively agree on removing a random "wart" which was clearly a bad approach. Hopefully this will allow for a real solution to be discussed, perhaps as part of "Common Lisp ANSI standard extension XXX". Xerox has its own standard called NS Characters. Xerox Lisp, Viewpoint, Print servers, etc. have been using it happily for quite some time now. I'd like to present the XNS Character standard's tenets but I'm not deeply familiar with them myself. Perhaps this message will stir one of our shy folks to speak up.
(ron) Xerox AI Systems Palo Alto, California  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 3 Jun 86 00:08:47 EDT Received: from SU-GLACIER.ARPA by SU-AI.ARPA with TCP; 2 Jun 86 20:58:37 PDT Received: by su-glacier.arpa with Sendmail; Mon, 2 Jun 86 20:59:20 pdt Received: from pachyderm.UUCP (pachyderm.ARPA) by mips.UUCP (4.12/4.7) id AA10903; Mon, 2 Jun 86 18:52:04 pdt Received: by pachyderm.UUCP (4.12/4.7) id AA22961; Mon, 2 Jun 86 18:49:56 pdt Date: Mon, 2 Jun 86 18:49:56 pdt From: mips!pachyderm.earl@su-glacier.arpa (Earl Killian) Message-Id: <8606030149.AA22961@pachyderm.UUCP> To: glacier!franz!fimass!jkf@kim.Berkeley.EDU Cc: ucbkim!Xerox.COM!miller.pa@su-glacier.arpa, common-lisp@su-ai.ARPA In-Reply-To: glacier!franz!fimass!jkf@kim.Berkeley.EDU's message of Fri, 30 May 86 09:50:10 PST Subject: Re: Some questions From: glacier!franz!fimass!jkf@kim.Berkeley.EDU (John Foderaro) To: ucbkim!Xerox.COM!miller.pa Cc: common-lisp@su-ai.arpa Subject: Re: Some questions >> Do you transform the source to continuation passing style before >> compiling? >> How happy are you with this decision? >> If you don't, do you do tail-recursion optimizations anyway? The Franz Inc. ExCL (Extended Common Lisp) compiler does not transform to a continuation passing style before compilation. It detects tail-recursive calls and eliminates some self-tail-recursive calls and will soon give the user the option of eliminating non-self tail-recursive calls on certain architectures. Unless, you're talking about self-recursive LABELS, note that according to a discussion on this mailing list long ago, it is not legal to implement (defun revappend (a b) (if (null a) b (revappend (cdr a) (prog1 a (setf (cdr a) b))))) with a branch back to the beginning of the code, because function calls are defined in terms of symbol-function of the symbol that happens to be the car of a form. 
Thus you could do (setf (symbol-function 'foo) (symbol-function 'revappend)) and it would still be expected to call whatever is in the symbol-function cell of revappend for recursive calls. I'm not sure I like this (at the time I was suggesting that DEFUN ought to provide an implicit LABELS so that the self-recursion did the obvious thing), but that's what was decided. Of course, that doesn't mean it isn't possible to do tail-recursion elimination, as long as what you're doing is jump to the contents of the appropriate symbol-function cell, as opposed to a simple branch. Since your message said you only currently did self-tail-recursion, I'm assuming that's not what you're doing.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 2 Jun 86 21:36:18 EDT Received: from [192.10.41.41] by SU-AI.ARPA with TCP; 2 Jun 86 18:31:22 PDT Received: from WHITE-BIRD.SCRC.Symbolics.COM by ELEPHANT-BUTTE.SCRC.Symbolics.COM via CHAOS with CHAOS-MAIL id 13986; Mon 2-Jun-86 21:28:35 EDT Date: Mon, 2 Jun 86 21:24 EDT From: Robert W. Kerns Subject: Common Lisp Subset To: common-lisp@SU-AI.ARPA, mathis@USC-ISIF.ARPA cc: a37078%ccut.u-tokyo.junet%utokyo-relay.csnet@CSNET-RELAY.ARPA Supersedes: <860602212316.3.RWK@WHITE-BIRD.SCRC.Symbolics.COM>, <860602212339.4.RWK@WHITE-BIRD.SCRC.Symbolics.COM> Message-ID: <860602212410.5.RWK@WHITE-BIRD.SCRC.Symbolics.COM> Date: Mon, 2 Jun 86 11:44 EST From: mike@a One major win here as a subset is that you can still allow tagbody and go. But the go targets must be known and local. There is no implicit "throw" caused by interactions of go-targets and lexical scoping. This is the only change to Common Lisp which I believe will make compilation significantly easier. I thought the purpose of a subset implementation was to make it run on smaller machines, not to make it easier to write the compiler. I would also assume that at least a minimal amount of compatibility would be desired. Eliminating lexical scoping eliminates an awful lot of programs and techniques.
It also means you can't look at code and easily tell if it's legal; you have to expand every macro that uses variables to see if it works by turning into a lambda with a body. I also thought a major motivation for a subset language is the so-called "educational subset". Lexical scoping is the LAST thing I'd leave out of an educational subset. I would not leave object oriented programming out of the subset. Then perhaps it should be an option. If for example CommonLoops were adopted as a standard, I would take the subset which omits class "class". CommonLoops will allow programmers to extend the subset with the features that they need in a syntactically pleasing and upwardly compatible fashion. I.E., if the subset doesn't include complex numbers, but a user needs them, it is easy in commonLoops to overload the arithmetic and other operations to do type dispatch for complex numbers. The same is true for rationals, bignums, etc. etc. The goal is to strip out most of the built in types which can be constructed or provided as part of a library. Hash tables, fill-pointer'ed arrays, extended arithmetic types, etc. all fall in this category. Then programmers can use the object oriented feature to build their own implementations of the needed types. Agreed...if there is savings enough to make it worthwhile. (I don't really know). Finally, on the small machines for which this subset is intended, declarations are crucial to performance, so they should be allowed and exploited. Yes.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 2 Jun 86 18:47:05 EDT Received: from [192.10.41.41] by SU-AI.ARPA with TCP; 2 Jun 86 15:36:41 PDT Received: from EUPHRATES.SCRC.Symbolics.COM by ELEPHANT-BUTTE.SCRC.Symbolics.COM via CHAOS with CHAOS-MAIL id 13860; Mon 2-Jun-86 18:33:59 EDT Date: Mon, 2 Jun 86 18:34 EDT From: David A. 
Moon Subject: long-char, kanji To: common-lisp@SU-AI.ARPA In-Reply-To: Message-ID: <860602183404.0.MOON@EUPHRATES.SCRC.Symbolics.COM> Date: Mon, 2 Jun 1986 11:16 EDT From: "Scott E. Fahlman" Suppose we were to change the standard to eliminate the Bit and Font fields in characters. (Such fields, along with other attributes such as "style", would be allowed as extensions, but Char-Bit and Char-Font would no longer be part of the standard language.) Would anyone be screwed by this? Would anyone even be unhappy about it? Symbolics uses CHAR-BITS and does not use CHAR-FONT. Since I don't see how a portable program could ever use CHAR-BITS, whose actual meaning is not defined by the language standard, I don't think it would hurt our users if CHAR-BITS were a language extension instead of a part of the standard language. The important thing is for the standard language to recognize that CHAR-CODE is not the only possible field, and provide the appropriate abstractions for dealing with characters in this light (for the most part, it does so now). I think you will find that there are some other implementations that do use CHAR-FONT, but I'll let them speak for themselves, if they wish to say anything.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 2 Jun 86 17:51:39 EDT Received: from MIT-LIVE-OAK.ARPA by SU-AI.ARPA with TCP; 2 Jun 86 14:41:31 PDT Received: from GOLD-HILL-ACORN.DialNet.Symbolics.COM by MIT-LIVE-OAK.ARPA via DIAL with SMTP id 2237; 2 Jun 86 17:41:05-EDT Received: from BOSTON.Gold-Hill.DialNet.Symbolics.COM by ACORN.Gold-Hill.DialNet.Symbolics.COM via CHAOS with CHAOS-MAIL id 21860; Mon 2-Jun-86 17:43:27-EDT Date: Mon, 2 Jun 86 17:40 EST Sender: mike@a To: Bill.Scherlis@C.CS.CMU.EDU from: mike%acorn@oak.lcs.mit.edu Subject: please Cc: common-lisp@SU-AI.ARPA Date: Mon 2 Jun 86 12:04:59-EDT From: Bill.Scherlis@C.CS.CMU.EDU If your mail header does not identify you or your mail address, please include it in your message. Who is "mike@a"? ------- Sorry about that. 
I am: Mike Beckerle Mike%acorn@oak.lcs.mit.edu Gold Hill Computers Cambridge, MA.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 2 Jun 86 13:08:35 EDT Received: from UTAH-20.ARPA by SU-AI.ARPA with TCP; 2 Jun 86 10:01:25 PDT Date: Mon 2 Jun 86 10:59:31-MDT From: SANDRA Subject: extending format To: common-lisp@SU-AI.ARPA Message-ID: <12211639550.29.LOOSEMORE@UTAH-20.ARPA> Oh no! Still more obscure options to format! Gack! :-) Seriously, I think format is already much too complicated. Sure, you can get some amazing effects from all of those strange options, but at the expense of making your code totally incomprehensible to the casual reader. Cursor positioning and screen manipulation functions should be *functions*, and preferably "yellow pages" utilities rather than part of the language core. I might also point out that a high-level display manipulation package (a la curses) stands a much better chance of being portable between different devices than does a low-level protocol. Yes, one approach to portability is to reduce everything to a "lowest common denominator" and assume that every device can behave roughly like a dumb VT52, but if you do that you lose on efficiency since most devices these days are a whole lot smarter than a VT52. -Sandra Loosemore -------  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 2 Jun 86 13:06:25 EDT Received: from SCRC-QUABBIN.ARPA by SU-AI.ARPA with TCP; 2 Jun 86 09:58:31 PDT Received: from FIREBIRD.SCRC.Symbolics.COM by SCRC-QUABBIN.ARPA via CHAOS with CHAOS-MAIL id 4927; Mon 2-Jun-86 12:58:31 EDT Date: Mon, 2 Jun 86 12:59 EDT From: David C. Plummer Subject: please To: Bill.Scherlis%C.CS.CMU.EDU@MIT-MC.ARPA, common-lisp@SU-AI.ARPA In-Reply-To: <12211629622.23.SCHERLIS@C.CS.CMU.EDU> Message-ID: <860602125928.2.DCP@FIREBIRD.SCRC.Symbolics.COM> Date: Mon 2 Jun 86 12:04:59-EDT From: Bill.Scherlis@C.CS.CMU.EDU If your mail header does not identify you or your mail address, please include it in your message. Who is "mike@a"? Indeed! 
The Symbolics mail reading program parsed it into Allegheny, which is a local host! The received headers (as far as SU-AI) indicate he is at Gold-Hill: Received: from MIT-LIVE-OAK.ARPA by SU-AI.ARPA with TCP; 2 Jun 86 08:45:53 PDT Received: from GOLD-HILL-ACORN.DialNet.Symbolics.COM by MIT-LIVE-OAK.ARPA via DIAL with SMTP id 2218; 2 Jun 86 11:44:30-EDT Received: from BOSTON.Gold-Hill.DialNet.Symbolics.COM by ACORN.Gold-Hill.DialNet.Symbolics.COM via CHAOS with CHAOS-MAIL id 21783; Mon 2-Jun-86 11:46:54-EDT  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 2 Jun 86 13:05:57 EDT Received: from SU-CSLI.ARPA by SU-AI.ARPA with TCP; 2 Jun 86 09:56:24 PDT Date: Mon 2 Jun 86 09:53:28-PDT From: Evan Kirshenbaum Subject: Re: long-char, kanji To: Fahlman@C.CS.CMU.EDU cc: common-lisp@SU-AI.ARPA, evan@SU-CSLI.ARPA In-Reply-To: Message from ""Scott E. Fahlman" " of Mon 2 Jun 86 08:47:35-PDT I was doing an editor a while back, and while I used normal strings for most of the lines, I used lists of full chars for modified lines. It was nice to be able to use the extra bits to mean things like "added since the last screen refresh" or "deleted since..." to help the display driver optimize. I haven't used fonts, but only because I wasn't on a machine that could do them. I've since moved to an Explorer, and I'm sure that now that I have them, I'll use them. Also, now that I *have* a computer with Control, Meta, Super and Hyper keys, I'd be very upset if I couldn't input any combination I can type as a character. My vote is for leaving at least bits (and probably font) in the language. As long as string-char is there when you need efficiency, I don't see what's wrong with having full chars as well.
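[Editorial illustration of the CLtL-era attribute functions under discussion. MAKE-CHAR and CHAR-BITS are as defined in CLtL; their meaning is implementation-dependent, an implementation may decline to construct characters with non-zero attributes, and both were later dropped from the language:]

```lisp
;; CLtL's MAKE-CHAR builds a character with explicit bits and font
;; fields; CHAR-BITS reads the bits field back.  An editor in the
;; style Evan describes could use a bit to mean "added since the
;; last screen refresh" without disturbing the code attribute.
(let ((c (make-char #\a 1)))      ; bits field = 1, font defaults to 0
  (list (char-code c)             ; code attribute, same as for #\a
        (char-bits c)))           ; bits attribute: 1
```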
Evan Kirshenbaum evan@CSLI.STANFORD.EDU ...!ucbvax!decvax!decwrl!glacier!evan -------  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 2 Jun 86 12:12:30 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 2 Jun 86 09:05:06 PDT Received: ID ; Mon 2 Jun 86 12:05:00-EDT Date: Mon 2 Jun 86 12:04:59-EDT From: Bill.Scherlis@C.CS.CMU.EDU Subject: please To: common-lisp@SU-AI.ARPA Message-ID: <12211629622.23.SCHERLIS@C.CS.CMU.EDU> If your mail header does not identify you or your mail address, please include it in your message. Who is "mike@a"? -------  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 2 Jun 86 11:54:31 EDT Received: from MIT-LIVE-OAK.ARPA by SU-AI.ARPA with TCP; 2 Jun 86 08:45:53 PDT Received: from GOLD-HILL-ACORN.DialNet.Symbolics.COM by MIT-LIVE-OAK.ARPA via DIAL with SMTP id 2218; 2 Jun 86 11:44:30-EDT Received: from BOSTON.Gold-Hill.DialNet.Symbolics.COM by ACORN.Gold-Hill.DialNet.Symbolics.COM via CHAOS with CHAOS-MAIL id 21783; Mon 2-Jun-86 11:46:54-EDT Date: Mon, 2 Jun 86 11:44 EST From: mike@a To: a37078%ccut.u-tokyo.junet%utokyo-relay.csnet@CSNET-RELAY.ARPA Subject: Common Lisp Subset Cc: common-lisp@SU-AI.ARPA, ida@UTOKYO-RELAY.CSNET, mathis@USC-ISIF.ARPA Date: Sat, 24 May 86 12:34:54+0900 From: Masayuki Ida A Common Lisp Subset Proposal Masayuki Ida Assoc.Prof, Aoyama Gakuin University Chair. Jeida Common Lisp Committee ..... I disagree mostly with this proposal. I think a good slim subset is important. However, I take the position that what we should leave out is lexical scoping. This does NOT mean we should regress to dynamic scoping, but that the subset allow only "local lexical" scoping. I.E., free vars in lambda's or defuns MUST be globals. Code which works under this restriction will always work under full lexical scoping. The subset interpreter should also have this scoping. One major win here as a subset is that you can still allow tagbody and go. But the go targets must be known and local. 
There is no implicit "throw" caused by interactions of go-targets and lexical scoping. This is the only change to Common Lisp which I believe will make compilation significantly easier. I would not leave object oriented programming out of the subset. If for example CommonLoops were adopted as a standard, I would take the subset which omits class "class". CommonLoops will allow programmers to extend the subset with the features that they need in a syntactically pleasing and upwardly compatible fashion. I.E., if the subset doesn't include complex numbers, but a user needs them, it is easy in commonLoops to overload the arithmetic and other operations to do type dispatch for complex numbers. The same is true for rationals, bignums, etc. etc. The goal is to strip out most of the built in types which can be constructed or provided as part of a library. Hash tables, fill-pointer'ed arrays, extended arithmetic types, etc. all fall in this category. Then programmers can use the object oriented feature to build their own implementations of the needed types. Finally, on the small machines for which this subset is intended, declarations are crucial to performance, so they should be allowed and exploited. ...mike  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 2 Jun 86 11:45:34 EDT Received: from HUDSON.DEC.COM by SU-AI.ARPA with TCP; 2 Jun 86 08:38:29 PDT Date: 2 Jun 86 11:25:00 EST From: "BIZET::ROBBINS" Subject: RE: FORMAT Features To: "common-lisp" Reply-To: "BIZET::ROBBINS" ---------------------Reply to mail dated 2-JUN-1986 01:18--------------------- >(2) Cursor positioning. The POINT on the screen can be set with >the directive "~n,m." (That is the "period" directive.) For >example (format t "~0,0.") puts the cursor in the home position. >(format t "~V,V." x y) puts the cursor at position X, Y. One problem >with this is that the period character is hard to read on some terminals >and listings. >If period is too hard to read the directive "~:%" might be used instead. 
>This makes sense because the ~% directive moves the cursor to the next >line. ~n,m:% moves the cursor to anyplace on the screen. The format directives ~! ~. ~I ~W ~_ are used by Dick Waters' pretty printing system. VAX Lisp V2.0 includes this pretty printer and I think that other Lisps may be using it in the future. Please leave these directives alone. -- Rich Arpa: Robbins@Hudson.Dec.Com ------  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 2 Jun 86 11:27:57 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 2 Jun 86 08:16:17 PDT Received: ID ; Mon 2 Jun 86 11:16:07-EDT Date: Mon, 2 Jun 1986 11:16 EDT Message-ID: Sender: FAHLMAN@C.CS.CMU.EDU From: "Scott E. Fahlman" To: shebs%utah-orion@utah-cs.arpa (Stanley Shebs) Cc: common-lisp@SU-AI.ARPA Subject: long-char, kanji In-reply-to: Msg of 2 Jun 1986 10:37-EDT from shebs%utah-orion at utah-cs.arpa (Stanley Shebs) Didn't you do an informal survey a while back on who actually used the standardized bits and fonts in characters? I believe the consensus was that nobody used them, either because an implementation didn't want to bother, or because it was inadequate and had to be extended by those implementations that did want more bits in characters. Well, the question came up, and I don't remember anyone expressing any fondness for Char-Bit and Char-Font, but on this list you never know if silence means agreement or fatigue. Let's try it again: Suppose we were to change the standard to eliminate the Bit and Font fields in characters. (Such fields, along with other attributes such as "style", would be allowed as extensions, but Char-Bit and Char-Font would no longer be part of the standard language.) Would anyone be screwed by this? Would anyone even be unhappy about it? This is a survey, not a formal proposal, but the result may guide our discussions in the future.
-- Scott  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 2 Jun 86 11:16:04 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 2 Jun 86 08:05:27 PDT Received: ID ; Mon 2 Jun 86 11:05:08-EDT Date: Mon, 2 Jun 1986 11:05 EDT Message-ID: Sender: FAHLMAN@C.CS.CMU.EDU From: "Scott E. Fahlman" To: common-lisp@SU-AI.ARPA Subject: Guidelines for the Standard This is in reply to Christopher Fry . First, let's not distribute the changes piecemeal, but rather as a big lump. Maybe this should be done every September? If the whole community is participating in the discussion of what changes to make, they will know what is coming. We can't very well dictate to the companies when their new releases, incorporating a particular set of changes, are to come out. Most companies can't even dictate that to themselves. However, changes should be grouped into discrete clusters, corresponding to proposals for a revision to the official standard. That way, a company can advertise a particular version of their Lisp as "corresponding to [proposed] ANSI/ISO Common Lisp 91" or whatever. Portable programs that run in one "Common Lisp 91" ought to run in all others. Some companies may want to track each change as it is decided in some internal version, and bring a new version to market as soon as the proposal for Common Lisp N is finalized; others may want to wait until the standard is officially approved. Presumably Common Lisp N-1 will still be available from the same company for some transition period. On the issue of wanting to change function names and argument orders for greater internal consistency, I hear you, but I believe that few of the vendors and major users would share your enthusiasm for such changes. I could be wrong about this (mail pro and con is welcome), but that is my current reading of the community.
While one could in principle build a program updater that makes such changes automatically, there is always a lot of hassle during the transition, as some programs get updated and others escape. One could also argue (weakly) that this language is doomed to a certain amount of inconsistency in naming and argument order. Any set of rules you establish for this is going to break down in certain places. (-: By the way, how else would you spell RPLACA? We kept this around mostly as a historical monument, and any tampering would amount to desecration. It does an ugly thing, so it should be ugly. Only wimps use Setf of Car. But we artists are allowed to differ in matters of aesthetic judgement. :-) -- Scott  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 2 Jun 86 10:46:01 EDT Received: from UTAH-CS.ARPA by SU-AI.ARPA with TCP; 2 Jun 86 07:36:54 PDT Received: by utah-cs.ARPA (5.31/4.40.2) id AA06254; Mon, 2 Jun 86 08:37:41 MDT Received: by utah-orion.ARPA (5.31/4.40.2) id AA28604; Mon, 2 Jun 86 08:37:37 MDT Date: Mon, 2 Jun 86 08:37:37 MDT From: shebs%utah-orion@utah-cs.arpa (Stanley Shebs) Message-Id: <8606021437.AA28604@utah-orion.ARPA> To: Fahlman@c.cs.cmu.edu, common-lisp@su-ai.arpa Subject: Re: long-char, kanji News-Path: utah-cs!@SU-AI.ARPA:FAHLMAN@C.CS.CMU.EDU References: Didn't you do an informal survey a while back on who actually used the standardized bits and fonts in characters? I believe the consensus was that nobody used them, either because an implementation didn't want to bother, or because it was inadequate and had to be extended by those implementations that did want more bits in characters. 
stan  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 2 Jun 86 04:48:00 EDT Received: from REAGAN.AI.MIT.EDU by SU-AI.ARPA with TCP; 2 Jun 86 01:40:38 PDT Received: from MOSCOW-CENTRE.AI.MIT.EDU by REAGAN.AI.MIT.EDU via CHAOS with CHAOS-MAIL id 34396; Mon 2-Jun-86 04:41:22-EDT Date: Mon, 2 Jun 86 04:41 EDT From: Christopher Fry Subject: Guidelines for the Standard To: Fahlman@C.CS.CMU.EDU, common-lisp@SU-AI.ARPA In-Reply-To: Message-ID: <860602044120.6.CFRY@MOSCOW-CENTRE.AI.MIT.EDU> The existence of a large and fast-growing user community cuts two ways. On the one hand, for every change that we consider, we must think about not only the merits of making the change, but also the cost in terms of code that must be changed and users who must be retrained. First, let's not distribute the changes piecemeal, but rather as a big lump. Maybe this should be done every September? I can imagine many changes to CL that would make the language easier to use for BOTH experienced and new users. The spelling of RPLACA is my favorite example. I'm afraid Scott would call these "aesthetic". Well, you can accuse me of being an artist. As far as changing code, we can provide programs and data that pretty much do the right thing to source code. There will always be glitches, of course, but things like changing the spelling of a function and changing arg order can be done fairly reliably with a small program and a set of data to specify the changes. Coral recently made a major name change in our underlying software. I wrote a function which took as args a list of source files, an alist of old-new name pairs, and a list of need-to-be-looked-at symbols. The program creates new files that are the same as the old ones with all symbols of old changed to new, and appends "???" to the end of need-to-be-looked-at symbols. After the automatic conversion, I just searched for "???" and did the necessary editing. "???" was an argument, as was a suffix to be put on the names of the newly created files.
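[Editorial sketch of a converter along the lines Fry describes. REPLACE-TOKEN is a hypothetical helper that substitutes whole symbols only (not substrings); the LOOP syntax is the modern extended form:]

```lisp
;; Sketch: for each file, write FILE + SUFFIX with every old name in
;; RENAMES (an alist of old/new strings) replaced by the new one, and
;; FLAG appended to each occurrence of a need-to-be-looked-at symbol.
(defun convert-files (files renames suspects
                      &key (flag "???") (suffix ".converted"))
  (dolist (file files)
    (with-open-file (in file :direction :input)
      (with-open-file (out (concatenate 'string file suffix)
                           :direction :output)
        (loop for line = (read-line in nil nil)
              while line do
              (dolist (pair renames)
                (setq line (replace-token line (car pair) (cdr pair))))
              (dolist (sym suspects)
                (setq line (replace-token line sym
                                          (concatenate 'string sym flag))))
              (write-line line out))))))
```

A smarter converter would work on read forms rather than text lines, as the following paragraph suggests.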
A smarter program could distinguish symbols in function calling syntax from others, recognize obsolete keywords, find certain reader macros, etc. All these tricks are designed to reduce the number of need-to-be-looked-at constructs. Such a converter program and its annual data could be kept in the yellow pages. For obscure purposes, a converter could even go backwards using the same data. CL could be a lot more consistent with itself. Like, for instance, all list operators could take their primary list argument as the first argument. Same thing for string, array and sequence operators. A not-very-clever text modifier could even change CLtL with regards to function spelling and arg order. Scott is correct to point out that a constantly changing standard is not a standard, and we do need a standard. But a more consistent language would be easier to formally standardize, learn, and remember. Ease of learning and remembering are key to making any standard widely used. Any mods will initially cost people currently involved with CL time. The issue is, how long will it take them to pay for themselves in people-hours versus the life-expectancy of the OLD CL? If CL is expected to grow, we must count the hours of the yet-to-be CL hackers in the equation.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 2 Jun 86 04:11:06 EDT Received: from REAGAN.AI.MIT.EDU by SU-AI.ARPA with TCP; 2 Jun 86 00:58:44 PDT Received: from MOSCOW-CENTRE.AI.MIT.EDU by REAGAN.AI.MIT.EDU via CHAOS with CHAOS-MAIL id 34394; Mon 2-Jun-86 03:59:08-EDT Date: Mon, 2 Jun 86 03:59 EDT From: Christopher Fry Subject: long-char, kanji To: Fahlman@C.CS.CMU.EDU, common-lisp@SU-AI.ARPA In-Reply-To: Message-ID: <860602035907.5.CFRY@MOSCOW-CENTRE.AI.MIT.EDU> In trying to formulate an international standard for Common Lisp, we clearly need to deal with these issues of extended character sets. I'm assuming that 16 bits of character code is enough to meet everyone's needs -- is that naive?
How many thousand characters are necessary for Japanese, and are these the same as the thousands needed for Chinese? Are there other languages in the computer-using world that have non-phonetic alphabets with thousands of characters? I was given a demo at BITSTREAM [in Cambridge] by Phil Appley [spelling?] on their 3600 last week. Their business is characters and fonts. Their "character set" is about 5K characters. Presumably most of their fonts don't even represent most of those characters, but all of the chars are represented by some font or other that they're working on. I don't know if they work on Asian languages, but I suggest that they get to at least review Common Lisp character standards if a change is coming. They could probably make some insightful comments about character representation. I can't find a net address for Phil, though he used to be at MIT. Symbolics must have formal contact with BITSTREAM. RLB at scrc has done a bunch of work with fonts so let's ask him too. ------- Since people are thinking about fat characters, here's some news from Macintosh. The Mac allows users to change characters in editors via menu, displaying WYSIWYG instantly. The INDEPENDENT attributes of a character are: font size bold italic outline shadow [a 3d effect] Thus any character can have any number of these. To this set I'd like to add 2 other attributes which are very useful in "editing" text in the newspaper sense: underline strike-out (each char has a horizontal line through its middle. A whole word made up of strike-out style chars is typically still readable, but looks crossed out. I think the idea came from PARC). [wouldn't you love to be able to STRIKE-OUT bad code, yet leave it visible as a negative example?] Note that the last 6 of these require just 1 bit each. Color is yet another issue. And grey-scaled [dejagged] fonts might bring up some other issues, particularly with regards to displaying on machines with different numbers of bits per pixel.
I'm not suggesting that CL standardize on all of this. I am suggesting that the CL standard makes it not too painful for innovators to integrate such fancy stuff in their system which has CL as a base.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 2 Jun 86 02:03:07 EDT Received: from IMSSS by SU-AI with PUP; 01-Jun-86 22:58 PDT Date: 1 Jun 1986 2254-PDT From: Rem@IMSSS Subject: oops To: COMMON-LISP@SU-AI That last sentence should read: ELIOT's proposal is merely deficient, rather than not in the right direction, probably... -------  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 2 Jun 86 01:57:34 EDT Received: from IMSSS by SU-AI with PUP; 01-Jun-86 22:53 PDT Date: 1 Jun 1986 2253-PDT From: Rem@IMSSS Subject: CURSORPOS etc. directives To: COMMON-LISP@SU-AI ELIOT's proposed clear-to-EOL is ambiguous for bitmapped displays. Do you clear a swath from the bitmap equal in height to the current font height on the current line? (What is the current font if you have just done a cursorpos without printing anything?) Or do you clear only text that is exactly on the same line in the same font and not text from neighboring lines that slightly overlaps the current line? Perhaps ELIOT wants this feature only for character-only displays or other devices that support only fixed-width fixed-height fonts in character arrays with non-overlapping character positions, or any window that is emulating a printing terminal with no graphics or cursor positioning (i.e. maybe cursorpos and erase-eol are mutually exclusive on bitmapped displays)? Actually I'd like to see somebody propose a simple set of user-interface primitives that handle all usual kinds of terminals/displays with and without multiple windows/panes. ELIOT's proposal is merely deficient, not in the right direction, probably...
-------  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 2 Jun 86 01:11:13 EDT Received: from CSNET-RELAY.ARPA by SU-AI.ARPA with TCP; 1 Jun 86 22:06:30 PDT Received: from umass-cs by csnet-relay.csnet id ah05901; 2 Jun 86 1:06 EDT Date: Sun, 1 Jun 86 16:57 EST From: ELIOT%cs.umass.edu@CSNET-RELAY.ARPA To: Common-Lisp@SU-AI.ARPA Subject: FORMAT Features I have been annoyed that Common Lisp has no screen manipulation primitives. Clearly it would be nice to have a general set of Window functions, but it would also be extremely difficult to do right. I think that something like the following new FORMAT features would prove to be useful. Basically I am proposing to put some of the features of Maclisp's CURSORPOS function into FORMAT. (1) Clear-Screen. The ~| directive used to clear the screen (window) on some CL implementations, but last time I tried it I got PAGE in a lozenge instead. While this behavior is useful for viewing files, it is not always desirable. If it is undesirable to change the behavior of ~| (again) then the ~:| directive could be used instead. If the output stream is a terminal or window then this directive clears it. Otherwise it "outputs a page separator character". The following directives signal an error if the output stream cannot support the operation. (2) Cursor positioning. The POINT on the screen can be set with the directive "~n,m." (That is the "period" directive.) For example (format t "~0,0.") puts the cursor in the home position. (format t "~V,V." x y) puts the cursor at position X, Y. One problem with this is that the period character is hard to read on some terminals and listings. If period is too hard to read the directive "~:%" might be used instead. This makes sense because the ~% directive moves the cursor to the next line. ~n,m:% moves the cursor to anyplace on the screen. (3) Clear-Rest-Of-Line. The directive ~J clears all characters to the end of the line.
With a precomma argument it clears that many characters, starting at the cursor. Alternatively this function could be assigned to "~:&". This makes sense if one thinks of the fresh-line operation as being primarily to put the cursor before a blank line. The colon can be treated as suppressing the possible preceding carriage return. In my experience these three functions can handle 95% of the screen manipulations that are needed. I would also eliminate most of the waffling in the description of the ~T tabulation directive. Implementations should just be required to keep track of where the cursor is. For files the implementation should keep track of how many characters have been output since the last newline. I can't believe it's really that hard. The directive doesn't really serve any useful purpose if people can't figure out what it is going to do. Each implementation still needs its own way of determining what type of terminal is being used. Chris Eliot ELIOT@UMASS (CSNET) CRE@MIT-MC (ARPA)  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 1 Jun 86 19:01:20 EDT Received: from XEROX.COM by SU-AI.ARPA with TCP; 1 Jun 86 15:55:46 PDT Received: from Cabernet.ms by ArpaGateway.ms ; 01 JUN 86 15:54:55 PDT Date: 1 Jun 86 15:54 PDT Sender: Bobrow.pa@Xerox.COM Subject: Re: Are isomorphic structures EQUAL? In-reply-to: Jonathan A Rees 's message of Thu, 29 May 86 14:23 EDT To: JAR@ai.ai.mit.edu cc: Moon@SCRC-STONY-BROOK.ARPA, common-lisp@SU-AI.ARPA From: Bobrow.pa@Xerox.COM (Danny Bobrow) Message-ID: <860601-155455-1729@Xerox> "I would mildly object to having [EQUAL] recursively compare components because that seems to be a violation of the data abstraction capability that structures are supposed to provide." One of the proposed uses for multi-methods in CommonLoops was to allow methods that declared themselves as -inside- both structures, and have the system ensure that it was called in just such a case. It would seem that EQUAL and others are good candidates for such methods.
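[Editorial illustration of the status-quo behavior being debated: EQUAL compares structure instances by identity rather than descending their slots, so isomorphic instances are not EQUAL. PT is a hypothetical structure name:]

```lisp
(defstruct pt x y)

;; Isomorphic but distinct instances: EQUAL does not descend slots.
(equal (make-pt :x 1 :y 2) (make-pt :x 1 :y 2))   ; => NIL

;; Only the identical object answers T:
(let ((p (make-pt :x 1 :y 2)))
  (equal p p))                                    ; => T
```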
"I would like to change the language so that the type of structures (whose DEFSTRUCT doesn't use :TYPE) is disjoint from other types." I agree on the utility of providing easy mechanisms to support disjointness. Are there implementations out there that do not support opaque types? ----- dgb:  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 31 May 86 21:19:08 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 31 May 86 18:10:53 PDT Received: ID ; Sat 31 May 86 21:10:38-EDT Date: Sat, 31 May 1986 21:10 EDT Message-ID: Sender: FAHLMAN@C.CS.CMU.EDU From: "Scott E. Fahlman" To: "Robert W. Kerns" Cc: common-lisp@SU-AI.ARPA Subject: long-char, kanji Didn't I already suggest how to avoid paying a penalty for Japanese the last time this topic came up? Instead of a two-way partitioning of the type space, have a three-way partitioning. ``PLUMP-STRING'' uses 16 bits to represent the characters, with default style. I've got no problem with this, but it seems to be what Moon was arguing against earlier (except that he expanded it to a four-way partitioning and then decided that THAT was too complex). ...Please don't quote CLtL at us, we know very well what the book says. We consider this portion of the book to be very poorly thought out, and not suitable as a guide. Preserving the status quo here would be a mistake. Let's not use the book as a substitute for doing language design. First, the reason I quoted the manual at Moon was that I read his note as saying that any vector of type character was obviously a string. It sounded like he was confused about the current state of things. It is not uncommon for people whose implementations have lots of extensions to forget where Common Lisp leaves off and the extensions begin. Even if Moon was not confused himself (his note can be read either way), other readers might be. When I see something like that go by, I feel that it is important to flag the problem before the possible confusion spreads any farther.
If Moon's note had clearly indicated that he was proposing a change or extension to our current definition of string, then I wouldn't have quoted the book at him. (I would, however, have wondered how you use the double-quote syntax to handle characters with bits that have no compact printable representation, and characters with font attributes that are perhaps not printable on the machine where the I/O is occurring.) For reasons indicated in my "proposed guidelines" message, I think that we must start with the status quo, regardless of how stupid you or I might think that some part of it might be. Proposals to change the language spec are in order, but they must be very clearly labeled as such, and the costs to users and implementors must be considered as well as the benefits of making a change. I guess it does no harm for Symbolics to extend Strings to hold all kinds of characters (if the extension is internally consistent), as long as you don't use this in portable code and as long as you don't let this extension contaminate any function in the LISP package...but that's another discussion. I also agree that the current definition of characters, with their bit and font attributes, is a total mess, but it's one that we can live with. I'd love to make an incompatible change here and clean everything up, but we have to move very carefully on such things. There's a lot of code out there that might be affected. We should probably begin with a survey of who would be screwed if char-bits and char-fonts went away. -- Scott  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 31 May 86 13:38:38 EDT Received: from [192.10.41.41] by SU-AI.ARPA with TCP; 31 May 86 10:28:42 PDT Received: from WHITE-BIRD.SCRC.Symbolics.COM by ELEPHANT-BUTTE.SCRC.Symbolics.COM via CHAOS with CHAOS-MAIL id 12712; Sat 31-May-86 13:26:22 EDT Date: Sat, 31 May 86 13:21 EDT From: Robert W. Kerns Subject: long-char, kanji To: common-lisp@SU-AI.ARPA cc: David A.
Moon In-Reply-To: Supersedes: <860531131604.5.RWK@WHITE-BIRD.SCRC.Symbolics.COM>, <860531131735.6.RWK@WHITE-BIRD.SCRC.Symbolics.COM>, <860531131827.7.RWK@WHITE-BIRD.SCRC.Symbolics.COM> Message-ID: <860531132140.9.RWK@WHITE-BIRD.SCRC.Symbolics.COM> Date: Sat, 31 May 1986 00:20 EDT From: "Scott E. Fahlman" True, but my guess is that few implementations will choose to add such a thing. I think our current view at CMU (Rob will correct me if I'm wrong) is that highlighting and the other things you do with "styles" is better accomplished with some sort of external data structure that indicates where the highlighting starts and stops. I strongly disagree that this is "better". It might be "necessary" in some environment or other, but I could argue that you're trying to represent too much text in one string if it's really a problem for you. The point here is that one of the main purposes of the string data-type is to hold text, not arbitrary sequences of characters. Character styles are primarily part of text, not just some add-on highlighting that's part of the presentation of that text. As part of text, they can appear in text in editor buffers, in text files, in source files, etc. True, sometimes they are part of the presentation, say for highlighting a selected element in a menu. And indeed, in cases like this, we supply the character-style information separately as either data-structure or program-state. It seems wasteful to do this on a per-character basis, and even more wasteful to tax every character (or even just every Japanese character) with a field to indicate possible style modification. We wouldn't make it illegal to do this, but many implementations will go for the 2x compactness instead. It's not for all characters, unless you put all your text in one string, which is not a very good implementation technique, especially in an editor. Didn't I already suggest how to avoid paying a penalty for Japanese the last time this topic came up? 
Instead of a two-way partitioning of the type space, have a three-way partitioning. ``PLUMP-STRING'' uses 16 bits to represent the characters, with default style. Most implementations would probably NOT need more than one character datatype to implement any of these schemes, since even the hairy characters would be an immediate datatype, but there would be a ``PLUMP-CHARACTER'' type, consisting of those characters which fit in PLUMP-STRINGs. I believe I already explained how to use 16 bits to represent multiple languages. Of course, this name isn't adequate; it should be named something which reflects the fact that this is single-language. More useful in the probably more common case of single-language systems would be the equivalent technique applied to things which are all of the same character-set (i.e. language) but with various styles. Again, this would typically be a 16-bit-per-character representation, although some might choose to do it with fewer. As I read the manual, Common Lisp strings are not now allowed to contain any characters with non-zero bit and font attributes. Arbitrary characters can be stored in vectors of type Character, which are not Strings and do not print with the double-quote notation. This means they are useless, or nearly so. Please don't quote CLtL at us, we know very well what the book says. We consider this portion of the book to be very poorly thought out, and not suitable as a guide. Preserving the status quo here would be a mistake. Let's not use the book as a substitute for doing language design. (I do consider compatibility to be a language design issue. Let me assert that I believe compatibility is not a real problem here. If you disagree with this, please give arguments other than just "status quo"). I am just suggesting that we preserve this status quo: the name String might be extended to include Fat-String (in the narrow sense of Fat-String defined above) but not to include vectors of arbitrary characters.
The only way we're not in compliance is that we allow storing of characters with non-zero bits in strings (and files). I don't see how this can be a problem for any legal CL program. The issue here is one of type hierarchy. Remember, MAKE-ARRAY with :ELEMENT-TYPE CHARACTER, i.e. (ARRAY CHARACTER), is allowed to give you back any kind of array that can hold characters (i.e. any supertype of character, including T). Similarly, :ELEMENT-TYPE FAT-STRING-CHAR, i.e. (ARRAY FAT-STRING-CHAR), is allowed to give you back an array that can hold characters with bits as well. Nowhere else in CLtL do we forbid implementations from allowing additional types to be stored in arrays. Especially note how the :ELEMENT-TYPE argument to OPEN is defined, with explicit mention of :ELEMENT-TYPE CHARACTER as storing "any character". I think the intent was to not require implementations to support putting arbitrary characters in strings, not to forbid them from doing so. Note that I'm not advocating that putting characters with bits in strings is really a good idea. If you want my honest opinion, I don't think they should be of type CHARACTER at all. I think they're more logically of type INPUT-GESTURE, and may include various other bits of information, like whether you hit the key with a feather or a hammer, or where you drew the letter 'L' on the pad, or perhaps it wasn't a letter, but a key on the synthesizer you keep next to your console. But the rest of CL does not get into the business of forbidding extensions. Let's not do it here, either. I see no reason why CL has to forbid the inclusion of, say, diagrams in strings.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 31 May 86 01:43:50 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 30 May 86 22:35:21 PDT Received: ID ; Sat 31 May 86 01:35:11-EDT Date: Sat, 31 May 1986 01:35 EDT Message-ID: Sender: FAHLMAN@C.CS.CMU.EDU From: "Scott E.
Fahlman" To: common-lisp@SU-AI.ARPA Cc: fahlman@C.CS.CMU.EDU Subject: Guidelines for the Standard Before we start discussing specific technical issues, we should discuss in a general way what we want to achieve by this process. In particular, we need to think about how much we are willing to change the language from what is described in "Common Lisp the Language". I hope we won't get bogged down in this discussion, but if we can set up some explicit guidelines for ourselves now, it may eliminate a lot of confusion about our goals later on, while we are working on technical issues. I say "guidelines" because there is no way we can agree to a set of strict and binding rules at this point. But at least if we have a set of guidelines, the burden will be on the person who argues that we should violate them in some specific case. I have heard a variety of opinions on the issue of how closely we should adhere to CLtL. Some have argued that the existing Common Lisp is so terribly flawed that it should not become a standard; we would be better off starting over. Others have argued that in the first attempt to standardize a Lisp under ANSI and ISO, we should not deviate at all, in incompatible ways, from the language as described in Steele. These people argue that we should get a stable standard in place, and then do any necessary tuning in subsequent versions. I think that most of us are somewhere between these two extremes. Let me set forth my own view of what our guidelines should be, which I think a lot of people share. Comments are welcome. -- Scott --------------------------------------------------------------------------- Despite its imperfections, Common Lisp is already a standard. Nearly every U.S. company with a presence in the AI market has announced some sort of support for Common Lisp (though not always to the exclusion of supporting other Lisps). There is similar interest in Japan among those companies not totally committed to Prolog. 
It is now our goal to turn Common Lisp from a de facto standard into an official one. In the process, we have the opportunity to clarify those things that are currently ambiguous (and which therefore weaken the standard), to fix some problems that defeat the purpose of the standard (e.g. problems that make it hard to write portable code), and to finish defining some essential parts of the language that we left unfinished in the rush to complete CLtL. There are a number of implementations on the market, and many more in the pipeline. There are many hundreds of active users, and this number is growing very fast. A number of large software systems and AI toolkits have been ported to Common Lisp, and again this is an accelerating trend. All of this says that there is a very large investment in the existing language; by the time a standard could be approved, this investment will probably have doubled. The existence of a large and fast-growing user community cuts two ways. On the one hand, for every change that we consider, we must think about not only the merits of making the change, but also the cost in terms of code that must be changed and users who must be retrained. On the other hand, if a change is unavoidable, the sooner we make it, the smaller the cost. Different kinds of changes have different kinds of costs: Compatible extensions cost the users nothing (except that they make an already complex language more complex). The cost is to the various Common Lisp implementors who have to put the extension into their respective products. If the extension is rather small, or if it will be easy to implement (perhaps because someone supplies a public-domain implementation of the extension), then we can consider the proposal on its merits alone. If the extension requires some significant implementation to make a lot of internal changes, then the threshold is considerably higher. 
In the case of true ambiguities (where implementors really have chosen divergent interpretations, and not just where some clever fellow can find a loophole in the language of CLtL), it is generally worthwhile to make a clear choice, even though one group or another is going to have to fix things. In some cases it will be appropriate to explicitly allow both interpretations, but not where this tends to interfere with portability of code. Changes that affect a lot of existing user code should not be made unless there are VERY strong reasons for doing so. Changes that would break things in subtle ways are the worst -- something like changing the type of NIL, for example, would break all sorts of things. Changes to particular functions, which can be searched for by name and fixed in straightforward ways, are not quite so bad. Changes to obscure corners of the language that only concern a small minority of users are not so bad, especially if the change benefits that small community and is favored by them. For example, if there's some subtle issue about roundoff that should be changed in order to make the number crunchers happy, we could consider this. In general, changes that affect only the implementors are less costly than changes that affect all sorts of user code; there are fewer groups that have to make changes, and they have more resources for doing so. We must not overburden the implementors with changes or it will cause delays and disruptions, and it could conceivably lead to a revolt, but my guess is that a couple of dozen changes, each of which takes a person-day or two to implement, would not be resented if the changes are made for good reason. From the time a change is approved until the time it becomes part of some Official Standard will be on the order of a year. That's plenty of time to make the changes and to build and test a new version, even for the most ponderous of companies. 
There's a lot of interest in improving code portability, settling on a workable error system, and trying to agree on an object-oriented programming facility (or foundation for one). If any of these goals requires some change, that probably counts as a good reason. At this point, mere aesthetic considerations are not sufficient to justify an incompatible change.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 31 May 86 00:28:50 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 30 May 86 21:21:54 PDT Received: ID ; Sat 31 May 86 00:20:53-EDT Date: Sat, 31 May 1986 00:20 EDT Message-ID: Sender: FAHLMAN@C.CS.CMU.EDU From: "Scott E. Fahlman" To: "David A. Moon" Cc: common-lisp@SU-AI.ARPA Subject: long-char, kanji In-reply-to: Msg of 30 May 1986 23:44-EDT from David A. Moon First, we could alter the type hierarchy as Moon suggests, and begin to encourage implementations to exercise their right to have zero-length font and bit fields in characters. A lot of us have come to feel that these were a major mistake and should begin to disappear. (We wouldn't legislate these fields away, just not implement them and encourage code developers not to use them.) An implementation that does this can have Fat-Strings with 16-bits per char, all of it Char-Code. This would be fine. The only problem is that if the implementation later wants to add character styles, it has to double the width of fat-strings or add a third type of string. True, but my guess is that few implementations will choose to add such a thing. I think our current view at CMU (Rob will correct me if I'm wrong) is that highlighting and the other things you do with "styles" is better accomplished with some sort of external data structure that indicates where the highlighting starts and stops. It seems wasteful to do this on a per-character basis, and even more wasteful to tax every character (or even just every Japanese character) with a field to indicate possible style modification.
We wouldn't make it illegal to do this, but many implementations will go for the 2x compactness instead. Alternatively, we could say that Fat-Char is a subtype of Character, with Char-Bit and Char-Font of zero. String-Char is a subtype of Fat-Char, with a Char-Code that fits (or can be mapped) into eight bits. A Thin-String holds only characters that are of type String-Char. A Fat-String holds Fat-Chars (some of which may also be String-Chars). If you want a vector of characters that have non-zero bits and fonts, then you use (Vector Character). I'm not sure what we do with the String type-specifier; the two reasonable possibilities are to equate it to Thin-String or to the union of Thin and Fat Strings. I take it the way this differs from your first alternative is that there are three subtypes of character and three subtypes of string, and you propose to name the additional types CHARACTER and (VECTOR CHARACTER). I don't think that's viable. The informal definition of STRING is anything that prints with double-quotes around it. Surely any one dimensional array of characters should qualify as a string. I don't think it makes sense to use the name STRING for a specialized subtype of (VECTOR CHARACTER) and have a different name for the general thing; I think it's always cleaner to use the short name for the general thing and qualified names for the specializations of it. Surely using STRING to mean the union of thin and fat strings, excluding extra-fat strings, would be confusing. As I read the manual, Common Lisp strings are not now allowed to contain any characters with non-zero bit and font attributes. Arbitrary characters can be stored in vectors of type Character, which are not Strings and do not print with the double-quote notation. I am just suggesting that we preserve this status quo: the name String might be extended to include Fat-String (in the narrow sense of Fat-String defined above) but not to include vectors of arbitrary characters.
-- Scott  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 30 May 86 23:52:44 EDT Received: from [192.10.41.41] by SU-AI.ARPA with TCP; 30 May 86 20:46:52 PDT Received: from EUPHRATES.SCRC.Symbolics.COM by ELEPHANT-BUTTE.SCRC.Symbolics.COM via CHAOS with CHAOS-MAIL id 12658; Fri 30-May-86 23:44:36 EDT Date: Fri, 30 May 86 23:44 EDT From: David A. Moon Subject: long-char, kanji To: common-lisp@SU-AI.ARPA In-Reply-To: Message-ID: <860530234421.0.MOON@EUPHRATES.SCRC.Symbolics.COM> Date: Fri, 30 May 1986 22:41 EDT From: "Scott E. Fahlman" .... The Symbolics spec, as described by Moon, meets these goals. However, he says that Fat-Char and String-Char form an EXHAUSTIVE partition of Character. I'd say that's just an implementation detail. I didn't mean to imply that it should not be legal to introduce a third subtype of CHARACTER. I agree with the goals, by the way. This means that if an implementation supports any Char-Bit or Char-Font bits, the fat strings must be able to accommodate these, in addition to the longer Char-Code field. Since the Char-Code will typically be 16 bits, it would be nice to be able to store just the char-code in a fat string, and not make the big jump to 32 bits per character, which is the next stop for most stock-hardware machines. We considered this (and in fact partially implemented it at one time) but felt that for our implementation the savings of storage did not justify the extra complexity of having three subtypes of STRING instead of two. (Actually, as it turns out there would be four subtypes: 8-bit, 32-bit, and two 16-bit subtypes, depending on whether the character has no style or has a small code -- both special cases occur with approximately equal frequency). I can easily understand that another implementation that had less memory available and was more willing to accept extra complexity might make this design decision the other way.
Having extra subtypes of CHARACTER is no problem, because just as with fixnums and bignums the user never sees them, but any user who modifies the contents of strings has to think about extra subtypes of STRING. I really don't know how to deal with four subtypes of STRING in a language standard. I'm sure you stock-hardware people will jump all over me if I suggest that SETF of CHAR should automatically change the representation of the string if it isn't wide enough for the character to fit, to make the subtypes essentially invisible to users. Perhaps we could take a hint from Common Lisp floating-point numbers, but I doubt that that analogy is very helpful. Two solutions are possible: First, we could alter the type hierarchy as Moon suggests, and begin to encourage implementations to exercise their right to have zero-length font and bit fields in characters. A lot of us have come to feel that these were a major mistake and should begin to disappear. (We wouldn't legislate these fields away, just not implement them and encourage code developers not to use them.) An implementation that does this can have Fat-Strings with 16-bits per char, all of it Char-Code. This would be fine. The only problem is that if the implementation later wants to add character styles, it has to double the width of fat-strings or add a third type of string. Alternatively, we could say that Fat-Char is a subtype of Character, with Char-Bit and Char-Font of zero. String-Char is a subtype of Fat-Char, with a Char-Code that fits (or can be mapped) into eight bits. A Thin-String holds only characters that are of type String-Char. A Fat-String holds Fat-Chars (some of which may also be String-Chars). If you want a vector of characters that have non-zero bits and fonts, then you use (Vector Character). I'm not sure what we do with the String type-specifier; the two reasonable possibilities are to equate it to Thin-String or to the union of Thin and Fat Strings.
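The alternative partition described above might be rendered as type definitions roughly like these (a sketch only; FAT-CHAR-P is a hypothetical predicate, and no implementation is claimed to define these names):

```lisp
;; Sketch of the proposed hierarchy. A FAT-CHAR is any character with
;; zero bits and font; STRING-CHAR (already in CLtL) further requires
;; a code that fits in eight bits.
(deftype fat-char ()
  '(and character (satisfies fat-char-p)))  ; FAT-CHAR-P is hypothetical
(deftype thin-string () '(vector string-char))
(deftype fat-string () '(vector fat-char))
;; Characters with non-zero bits or fonts would still go in
;; (vector character), which under this proposal is not a STRING.
```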
I take it the way this differs from your first alternative is that there are three subtypes of character and three subtypes of string, and you propose to name the additional types CHARACTER and (VECTOR CHARACTER). I don't think that's viable. The informal definition of STRING is anything that prints with double-quotes around it. Surely any one dimensional array of characters should qualify as a string. I don't think it makes sense to use the name STRING for a specialized subtype of (VECTOR CHARACTER) and have a different name for the general thing; I think it's always cleaner to use the short name for the general thing and qualified names for the specializations of it. Surely using STRING to mean the union of thin and fat strings, excluding extra-fat strings, would be confusing. Another solution that should be permitted by the language is to have only one representation for strings, which is fat enough to accommodate all characters. In some environments the frequency of thin strings might be low enough that the storage savings would not justify the extra complexity of optimizing strings that contain only STRING-CHARs. Stepping back a bit, what we have is an implementation-dependent spectrum of subtypes of STRING. We need names for the most general, which can hold any CHARACTER, and the least general, which is only required to be able to hold STANDARD-CHARs. In addition, we need a generic way to select from among implementation-dependent in-between types, if there are any. If you think Common Lisp should go this far, some deep thought is in order. For those of us without microcoded type-dispatch, Simple-String is a very important concept. It says that you can access the Nth character in the string simply by indexing off the address of the string by N bytes (maybe adding some fixed offset). On a lot of machines that is one or two instructions, and no conditionals.
If Simple-Strings can be either fat or thin, then you have to make a runtime decision about whether to index by N bytes or 2N bytes. So it is best to reserve Simple-String for simple thin strings and maybe add another type for Simple-Fat-String. I see. Let's put this in the manual next time around.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 30 May 86 22:53:26 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 30 May 86 19:50:04 PDT Received: ID ; Fri 30 May 86 22:49:46-EDT Date: Fri, 30 May 1986 22:49 EDT Message-ID: From: Rob MacLachlan To: preece%ccvaxa@gswd-vms.ARPA (Scott E. Preece) Cc: COMMON-LISP@SU-AI.ARPA Subject: Defstruct default values In-reply-to: Msg of 30 May 1986 11:43-EDT from preece%ccvaxa at gswd-vms.ARPA (Scott E. Preece) Date: Friday, 30 May 1986 11:43-EDT From: preece%ccvaxa at gswd-vms.ARPA (Scott E. Preece) To: COMMON-LISP at su-ai.arpa Re: Defstruct default values Maybe I'm missing something, but the book says that the default-init is evaluated =each time= a structure is to be constructed [emphasis per CLtL]. This is as opposed to being evaluated once at macroexpand time. Although the manual doesn't say that the default expression isn't evaluated if the slot value is supplied to the constructor, it seems pretty pointless to evaluate the default when it isn't used. Make that another clarification. My message was prompted by the discovery that a certain implementation had taken it upon itself to complain if the default was a constant and of the wrong type. Due to the dynamic typing in Common Lisp, it is not erroneous to have an expression which is type incorrect as long as the expression is never evaluated. It is reasonable for a compiler to warn about such probable type errors, but the defstruct default seems to me to be a special case, since syntactic bogosity requires people to specify spurious defaults.
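The situation under discussion can be seen in a minimal example (the structure and slot names here are invented for illustration):

```lisp
;; The :type slot option cannot be given without also giving a default,
;; so one ends up writing a "spurious" default of the wrong type. Under
;; the proposed clarification this is acceptable so long as the default
;; expression is never actually evaluated -- that is, so long as every
;; constructor call supplies the slot explicitly.
(defstruct employee
  (name "" :type string)
  (manager nil :type employee))  ; spurious default: NIL is not an EMPLOYEE
```

An implementation that signals an error at defstruct time because NIL is not of type EMPLOYEE is the behavior being complained about; a compile-time warning would be reasonable, an error is not.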
Rob  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 30 May 86 22:51:43 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 30 May 86 19:41:42 PDT Received: ID ; Fri 30 May 86 22:41:34-EDT Date: Fri, 30 May 1986 22:41 EDT Message-ID: Sender: FAHLMAN@C.CS.CMU.EDU From: "Scott E. Fahlman" To: common-lisp@SU-AI.ARPA Cc: fahlman@C.CS.CMU.EDU Subject: long-char, kanji In-reply-to: Msg of 30 May 1986 16:58-EDT from David A. Moon I like a lot of what Moon described, with a few reservations. Let me first describe what I see as the requirements: In trying to formulate an international standard for Common Lisp, we clearly need to deal with these issues of extended character sets. I'm assuming that 16 bits of character code is enough to meet everyone's needs -- is that naive? How many thousand characters are necessary for Japanese, and are these the same as the thousands needed for Chinese? Are there other languages in the computer-using world that have non-phonetic alphabets with thousands of characters? I think that we should define some notion of fat characters and the strings to hold them, and make sure that these are considered in all the appropriate places as we write the rest of the spec. Fat characters and strings should be an optional language feature: an implementation does not have to support these, but if it does, it should do it in the standard way. (We can specify some marker that is put on the *features* list if and only if fat characters are supported.) I assume that any Lisp that does not support fat characters will not do well in the Japanese market, so there's plenty of incentive for big companies to support this feature. The specification of fat characters must be done in such a way that currently legal implementations that do not support them can be left as is; implementations that do support them must be able to do so without penalizing users of normal non-fat strings, either in speed or storage space.
The Symbolics spec, as described by Moon, meets these goals. However, he says that Fat-Char and String-Char form an EXHAUSTIVE partition of Character. This means that if an implementation supports any Char-Bit or Char-Font bits, the fat strings must be able to accommodate these, in addition to the longer Char-Code field. Since the Char-Code will typically be 16 bits, it would be nice to be able to store just the char-code in a fat string, and not make the big jump to 32 bits per character, which is the next stop for most stock-hardware machines. I don't know how important people feel this is, but if I were storing lots of Japanese text in some application, I think I'd object to a 2X bloat factor. Two solutions are possible: First, we could alter the type hierarchy as Moon suggests, and begin to encourage implementations to exercise their right to have zero-length font and bit fields in characters. A lot of us have come to feel that these were a major mistake and should begin to disappear. (We wouldn't legislate these fields away, just not implement them and encourage code developers not to use them.) An implementation that does this can have Fat-Strings with 16-bits per char, all of it Char-Code. Alternatively, we could say that Fat-Char is a subtype of Character, with Char-Bit and Char-Font of zero. String-Char is a subtype of Fat-Char, with a Char-Code that fits (or can be mapped) into eight bits. A Thin-String holds only characters that are of type String-Char. A Fat-String holds Fat-Chars (some of which may also be String-Chars). If you want a vector of characters that have non-zero bits and fonts, then you use (Vector Character). I'm not sure what we do with the String type-specifier; the two reasonable possibilities are to equate it to Thin-String or to the union of Thin and Fat Strings. A SIMPLE-STRING is any string, thin or fat, that is a SIMPLE-ARRAY.
Since I don't know what the SIMPLE-STRING type is for, I don't know whether allowing SIMPLE-STRINGs to be fat is good or bad. For those of us without microcoded type-dispatch, Simple-String is a very important concept. It says that you can access the Nth character in the string simply by indexing off the address of the string by N bytes (maybe adding some fixed offset). On a lot of machines that is one or two instructions, and no conditionals. If Simple-Strings can be either fat or thin, then you have to make a runtime decision about whether to index by N bytes or 2N bytes. So it is best to reserve Simple-String for simple thin strings and maybe add another type for Simple-Fat-String. -- Scott  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 30 May 86 17:07:59 EDT Received: from SCRC-QUABBIN.ARPA by SU-AI.ARPA with TCP; 30 May 86 14:00:58 PDT Received: from EUPHRATES.SCRC.Symbolics.COM by SCRC-QUABBIN.ARPA via CHAOS with CHAOS-MAIL id 4349; Fri 30-May-86 17:01:25 EDT Date: Fri, 30 May 86 16:58 EDT From: David A. Moon Subject: long-char, kanji To: Masayuki Ida cc: common-lisp@SU-AI.ARPA In-Reply-To: <8605300555.AA07406@ccut.u-tokyo.junet> Message-ID: <860530165813.7.MOON@EUPHRATES.SCRC.Symbolics.COM> Here's how the character and string data types are organized in the Symbolics system (Release 7 happened to be the version I looked at), which supports Kanji as well as other extensions beyond the Common Lisp standard-chars. Perhaps this can serve as a guide, as one possible technique for extending Common Lisp characters that is demonstrated to work.

STANDARD-CHAR is a subtype of STRING-CHAR
STRING-CHAR is a subtype of CHARACTER
FAT-CHAR is a subtype of CHARACTER
FAT-CHAR and STRING-CHAR are an exhaustive partition of CHARACTER
THIN-STRING and FAT-STRING are an exhaustive partition of STRING
THIN-STRING is (VECTOR STRING-CHAR)
FAT-STRING is (VECTOR (OR FAT-CHAR STRING-CHAR))

STANDARD-CHAR includes only the 96 characters that Common Lisp says it includes.
STRING-CHAR includes a few dozen additional characters, and has a representation that is 8 bits wide. FAT-CHAR has a representation that is 28 bits wide and can express all other characters that we support. Note that there are slight deviations from the Common Lisp manual here. It's not true in our system that any character whose bits and font are zero is a STRING-CHAR, and it's not true that any STRING is a VECTOR of STRING-CHAR. These deviations are necessary and I don't think they depart from the spirit of the language. CHAR-CODE-LIMIT is 65536; the other bits in a FAT-CHAR are used for CHAR-BITS and CHAR-STYLE. CHAR-FONT-LIMIT is 1; that is, Symbolics does not use CHAR-FONT. CHAR-STYLE is a Symbolics extension that is used to express how the character is portrayed (size, italicization, boldface, typeface, etc.). Character codes are assigned dynamically. In files and for interchange, FAT-CHARs are represented not by the binary representation used in memory, but by a more symbolic representation involving the names of the character set and character style. This is the default; other representations can be used for interchange with other systems. Thus interchange with JIS 6226 and Hankaku would be equally possible; there is no assumption that the codes used internally and the codes used externally are the same. A SIMPLE-STRING is any string, thin or fat, that is a SIMPLE-ARRAY. Since I don't know what the SIMPLE-STRING type is for, I don't know whether allowing SIMPLE-STRINGs to be fat is good or bad. Note: FAT-CHAR, THIN-STRING, and FAT-STRING are not actually accepted by TYPEP (probably they ought to be), but there are predicates to test for these types. Usually the types are invisible; string-valued functions produce thin or fat strings as necessary, depending on the contents of the string. Note that the difference between thin and fat characters is completely transparent to the user, except that you cannot store a fat character into a thin string. 
There is no such thing as dual representations of a character. In this way thin and fat characters are very analogous to fixnums and bignums. I feel that this property is very important for the usability of the system. We haven't found a need for your STRING-NORMALIZE function, perhaps just because we don't worry that much about saving storage by making strings thin whenever possible. We do have a function ASSURE-FAT-STRING that goes the other way; if its argument is a THIN-STRING, it makes a new FAT-STRING that contains the same characters and returns it.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 30 May 86 16:20:05 EDT Received: from KIM.Berkeley.EDU by SU-AI.ARPA with TCP; 30 May 86 13:08:35 PDT Received: by kim.Berkeley.EDU (5.51/1.12) id AA26813; Fri, 30 May 86 13:08:35 PDT Received: from fimass by franz (5.5/3.14) id AA03277; Fri, 30 May 86 11:51:36 PDT Received: by fimass (5.5/3.14) id AA00215; Fri, 30 May 86 09:50:14 PST From: franz!fimass!jkf@kim.Berkeley.EDU (John Foderaro) Return-Path: Message-Id: <8605301750.AA00215@fimass> To: ucbkim!Xerox.COM!miller.pa Cc: common-lisp@su-ai.arpa Subject: Re: Some questions In-Reply-To: Your message of 29 May 86 21:21:00 PDT. <860529-212115-1390@Xerox> Date: Fri, 30 May 86 09:50:10 PST >> Do you transform the source to continuation passing style before >> compiling? >> How happy are you with this decision? >> If you don't, do you do tail-recursion optimizations anyway? The Franz Inc. ExCL (Extended Common Lisp) compiler does not transform to a continuation passing style before compilation. It detects tail-recursive calls and eliminates some self-tail-recursive calls and will soon give the user the option of eliminating non-self tail-recursive calls on certain architectures. Since Rob gave a description of some other compilers, I'll briefly describe ours: It was written from scratch for Common Lisp (it is *not* based on either the Spice Lisp compiler or the Franz Lisp compiler). 
It has four passes, the first being completely machine independent and the last completely machine dependent, with the middle passes being somewhere in between (i.e. machine independent skeletons with machine dependent flesh). Optimizations are done in each pass. One feature of the ExCL compiler that distinguishes it from other 'stock hardware' compilers is that by default it generates code which is as safe as interpreted code (with very low overhead). You can make it unsafe (to various degrees) by adding type declarations and/or using the optimization declaration, and this indeed helps in very frequently executed code, but in most code the speed difference between being safe and unsafe is imperceptible, and the benefits of being safe are tremendous. -john foderaro, franz inc.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 30 May 86 12:32:46 EDT Received: from MC.LCS.MIT.EDU by SU-AI.ARPA with TCP; 30 May 86 09:25:56 PDT Date: Fri, 30 May 1986 12:25 EDT Message-ID: From: BROOKS%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU To: miller.pa@XEROX.COM, Common-lisp@SU-AI.ARPA Subject: Some questions In-reply-to: Msg of 30 May 1986 03:29-EDT from Rob MacLachlan Date: Friday, 30 May 1986 03:29-EDT From: Rob MacLachlan To: miller.pa at XEROX.COM cc: Common-lisp at SU-AI.ARPA Re: Some questions ... this compiler. Some exceptions are Symbolics, Lucid and KCL. The Lucid compiler was originally based on the S1 compiler, but has reportedly been largely rewritten. The Lucid compiler is probably close to state of the art, but they are unlikely to talk about it in public. As principal author of the Lucid compiler I want to make a slight clarification. It is certainly an intellectual descendant of the S1 compiler, which in turn is based largely on Steele's Rabbit compiler. However, it was written from scratch for a Common Lisp subset and I have not even looked at the S1 code from about 6 months before coding started in August 83. 
It was rather different in internal organization from the start, and that difference has been magnified in converting it to handle full common lisp and multiple target machines. There will be a paper on the compiler in the ACM Lisp and Functional Programming Symposium to be held in Cambridge in early August.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 30 May 86 11:58:08 EDT Received: from GSWD-VMS.ARPA by SU-AI.ARPA with TCP; 30 May 86 08:47:33 PDT Received: from ccvaxa.GSD (ccvaxa.ARPA) by gswd-vms.ARPA (5.9/) id AA01169; Fri, 30 May 86 10:47:31 CDT Message-Id: <8605301547.AA01169@gswd-vms.ARPA> Date: Fri, 30 May 86 10:43:23 cdt From: preece%ccvaxa@gswd-vms.ARPA (Scott E. Preece) To: COMMON-LISP@su-ai.arpa Subject: Defstruct default values Maybe I'm missing something, but the book says that the default-init is evaluated =each time= a structure is to be constructed [emphasis per CLtL]. > From: Rob MacLachlan > > I propose the following "clarification": > > The default value for a defstruct slot need not be of the type > indicated in the :type slot option as long as the default is never > used. > > This is obviously semantically barfucious, but is essential for anyone > who ever actually uses the :type option, since there is no way to > specify a type without specifying a default. In many cases it is > incredibly difficult to come up with a default expression that you > can type in which will evaluate to an object of the correct type. I > have better things to do with my time than write hairy code which is > never evaluated. 
-- scott preece gould/csd - urbana ihnp4!uiucdcs!ccvaxa!preece  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 30 May 86 10:13:48 EDT Received: from CSNET-RELAY.ARPA by SU-AI.ARPA with TCP; 30 May 86 07:02:01 PDT Received: from utokyo-relay by csnet-relay.csnet id a010279; 30 May 86 9:56 EDT Received: by u-tokyo.junet (4.12/4.9J-1[JUNET-CSNET]) id AA00256; Fri, 30 May 86 20:32:15+0900 Received: by ccut.u-tokyo.junet (4.12/6.1Junet) id AA07406; Fri, 30 May 86 14:55:14+0900 Date: Fri, 30 May 86 14:55:14+0900 From: Masayuki Ida Message-Id: <8605300555.AA07406@ccut.u-tokyo.junet> To: common-lisp@SU-AI.ARPA, ida@UTOKYO-RELAY.CSNET, rwk@SCRC-YUKON.ARPA Subject: long-char, kanji >Date: Sun, 11 May 86 16:51 EDT >From: "Robert W. Kerns" >Subject: The first note on kanji, sent to junet site in Jan 1986 and some reactions in japan >In-Reply-To: <8605100330.AA08572@tansei.utyo.junet> >Message-Id: <860511165153.6.RWK@WHITE-BIRD.SCRC.Symbolics.COM> > Before replying to the mail, I want to summarize my way of understanding. The reason why I send this mail is that the first mail on KANJI to common-lisp at su-ai was not helpful for the discussion. This issue is also related to the relation between the character data type and the string data type. CLtL says,
T > character > string-char > standard-char
string = (array string-char (*)) = (vector string-char)
A string-char type object has zero value for the font and bits attributes. A standard-char type object is a character among the 95 (ASCII) characters and #\newline. The basic idea of my draft: add long-char, or extended-string-char, which is needed to represent multi-byte characters. (In the last mail, I used the word "japanese-char" instead. I realized it was a poor choice of naming. Here, I use "long-char" for multi-byte characters. But the naming is temporary.) The opinions behind the attempt to add long-char: There are many Lisps which cannot handle multi-byte characters correctly. 
Many implementors and users wanted to have a common way to handle japanese characters. Related facts (but only for information): Each character of standard-char type has another representation in the JIS 6226 two-byte representation, which I call here regular-long-char. Further, almost all the machines in Japan have another two-byte representation, which I call hankaku-long-char. Namely, "A", say, can be represented as a standard-char, regular-long-char or hankaku-long-char. Furthermore, " " (blank character), ",", ".", "(",")" can have three different representations ! The basic issues: Is the long-char a subtype of character ? Is the long-char a subtype of string-char ? What is the relation between standard-char and long-char ? Can a vector of long-char be a component of a string ? If the long-char is separated from string-char, should it have a font attribute or not?
--- Selection 1 --- make long-char be a subtype of string-char, i.e. string-char > long-char. long-char and standard-char are disjoint.
--- Selection 2 --- make long-char be a subtype of character type, i.e. character > long-char. and, string-char and long-char are disjoint.
--- Selection 3 --- make standard-char be a subtype of long-char, i.e. string-char > long-char > standard-char.
Possible consequences due to the above selections Selection 1: long-char (2 byte or more) and standard-char (1 byte) can be mixed in a string. --> It seems very heavy for general-purpose machines to support ELT, LENGTH, etc. correctly. And users may become confused when writing software. --> add a string-normalize function. (string-normalize x) means, if x is purely composed of standard-char then return x. if x is purely composed of long-char then return x. 
if x is composed of a mix of subtypes of string-char, then if all the characters of x can be represented in standard-char, each character is converted to the standard-char representation; elseif there is at least one character which can only be represented by long-char, then all characters are converted to the long-char representation; else error. Selection 2: A new problem will come. That is: Can the long-char type have a non-zero value for char-font and char-bits ? --> I feel the answer should be "NO". --> (vector long-char) cannot be a "string", because string is (vector string-char) and string-char and long-char are assumed to be disjoint. --> Need another type of string, say long-char-based-string, which is parallel to string, but is disjoint from string. I do not have a firm idea about the selection yet. At least in Japan, we have to settle the matter with several computer languages such as Cobol, Fortran, C, Ada,... and Common Lisp. Masayuki Ida ida%utokyo-relay.csnet@csnet-relay.arpa ----  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 30 May 86 04:02:45 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 30 May 86 00:45:14 PDT Received: ID ; Fri 30 May 86 03:44:34-EDT Date: Fri, 30 May 1986 03:44 EDT Message-ID: From: Rob MacLachlan To: common-lisp@SU-AI.ARPA Subject: Defstruct default values I propose the following "clarification": The default value for a defstruct slot need not be of the type indicated in the :type slot option as long as the default is never used. This is obviously semantically barfucious, but is essential for anyone who ever actually uses the :type option, since there is no way to specify a type without specifying a default. In many cases it is incredibly difficult to come up with a default expression that you can type in which will evaluate to an object of the correct type. I have better things to do with my time than write hairy code which is never evaluated. 
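[A minimal sketch of the situation described above; the structure and slot names are hypothetical, not from the message:

```lisp
;; The slot type is SIMPLE-VECTOR, but to write the :TYPE option at all
;; one must also write a default -- here NIL, which is not of that type.
;; Under the proposed clarification this is legal so long as every
;; constructor call supplies the slot explicitly.
(defstruct foo
  (bar nil :type simple-vector))

(make-foo :bar (vector 1 2 3))   ; fine; the default is never evaluated
```
]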
Rob  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 30 May 86 03:45:22 EDT Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 30 May 86 00:29:50 PDT Received: ID ; Fri 30 May 86 03:29:10-EDT Date: Fri, 30 May 1986 03:29 EDT Message-ID: From: Rob MacLachlan To: miller.pa@XEROX.COM Cc: Common-lisp@SU-AI.ARPA Subject: Some questions In-reply-to: Msg of 30 May 1986 00:21-EDT from miller.pa at Xerox.COM Date: Friday, 30 May 1986 00:21-EDT From: miller.pa at Xerox.COM To: Common-lisp at SU-AI.ARPA Re: Some questions Now that all you people and companies out there are doing all these common-lisp compilers, I'd like to ask you some questions: Do you transform the source to continuation passing style before compiling? How happy are you with this decision? If you don't, do you do tail-recursion optimizations anyway? If you do, do you do multiple values by calling the continuation with multiple arguments? The current Spice Lisp compiler is basically a one-pass compiler which goes directly from s-expressions to lap code. Closure variables are implemented by patching the lap code when a reference in a closure is discovered. The compiler is not tail-recursive, but tail-recursive self-calls turn into branches. Support for Common Lisp features not in Maclisp is generally poor. Many of the Common Lisps floating around are based on Spice Lisp and this compiler. Some exceptions are Symbolics, Lucid and KCL. The Lucid compiler was originally based on the S1 compiler, but has reportedly been largely rewritten. The Lucid compiler is probably close to state of the art, but they are unlikely to talk about it in public. I am currently writing a new compiler for Spice Lisp which is designed for Common Lisp and designed for portability. It will be tail-recursive, but I am not using CPS per se. The internal representation is not a syntax tree at all, but a flow-graph like representation optimized for flow analysis. 
The idea is that most analysis and optimization will be done using flow analysis rather than tree walks. It is too early to tell whether this is a good idea. Should the parameter list in a multiple-value-bind allow :optional, :rest, etc... What level is this question at? CLTL clearly doesn't allow any of the lambda-list keywords in the variable list for multiple-value-bind, but if you just macroexpand multiple-value-bind to a multiple-value-call of a lambda, then it isn't very hard to do. When the silver book says that something has dynamic extent, it is allowed for an implementation to provide indefinite extent, since "it is [only] an error" to try to interact with a value whose extent has expired. Providing indefinite extent would be a clean way for an implementation to offer upwards compatible extensions of the language. This would be particularly useful for catch tags. It is meaningless to talk about extending Common Lisp CATCH to have indefinite extent, since it has dynamic scope. Extending BLOCK/RETURN-FROM is a possibility, but I doubt that there's a great deal of enthusiasm. How happy are you with packages? In particular, for those familiar with T's reified lexical environments (aka LOCALEs), can you think of any reason for preferring packages? Packages cause a lot of grief, especially for new users. I think that schemes like locales that separate values from names are cleaner, but the historical compatibility imperative for Common Lisp probably makes such schemes impossible. This is because the dialects with which compatibility was desired have always confused names and values. In any case, it is clearly too late to replace packages with anything very different. Can the global scoping environment consistently be considered to simply be the outermost lexical environment? This question doesn't make a great deal of sense in Common Lisp, since it isn't block-structured. 
Some operations always manipulate the global environment (DEFUN), others create local definitions (LET). You can define some things globally which you can't define locally (types, constants), so the answer to your question may be no. Can T's LOCALEs be added to the language in an upwards compatible way? What do you mean by upward compatible? It would certainly be pretty gruesome using two namespace management systems simultaneously. The separate function, value and property-list cells would certainly hair up any LOCALE-like scheme. Has anybody given any thought to defining a formal semantics for common-lisp? Do you think there is any hope for such a thing? We don't even have an informal definition yet... Rob  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 30 May 86 02:22:34 EDT Received: from SU-SHASTA.ARPA by SU-AI.ARPA with TCP; 29 May 86 23:17:28 PDT Received: by su-shasta.arpa; Thu, 29 May 86 23:15:46 PDT Received: by ntt.junet (4.12/4.7JC-7) CHAOS with CHAOS-MAIL id AA03021; Fri, 30 May 86 13:25:17 jst From: yuasa@kurims.kurims.kyoto-u.junet Received: by ntt.junet (4.12/4.7JC-7) CHAOS with CHAOS-MAIL id AA02950; Fri, 30 May 86 13:23:38 jst Received: by kurims.kyoto-u.junet (2.0/4.7) id AA00627; Fri, 30 May 86 13:07:44+0900 Date: Fri, 30 May 86 13:07:44+0900 Message-Id: <8605300407.AA00627@kurims.kyoto-u.junet> To: common-lisp@su-ai.arpa Subject: Sorry, KEYWORD should be there. Sorry, I was wrong. The symbol KEYWORD should be in the LISP package. I should put it in my implementation... -- Taiichi  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 30 May 86 01:58:28 EDT Received: from [192.10.41.41] by SU-AI.ARPA with TCP; 29 May 86 22:49:49 PDT Received: from EUPHRATES.SCRC.Symbolics.COM by ELEPHANT-BUTTE.SCRC.Symbolics.COM via CHAOS with CHAOS-MAIL id 11942; Fri 30-May-86 01:47:30 EDT Date: Fri, 30 May 86 01:47 EDT From: David A. 
Moon Subject: portability of pathnames To: common-lisp@SU-AI.ARPA In-Reply-To: <12210693019.8.LOOSEMORE@UTAH-20.ARPA> Message-ID: <860530014715.8.MOON@EUPHRATES.SCRC.Symbolics.COM> I completely understand and sympathize with Loosemore's problems with pathnames, which are really problems with doing file operations in a generic way that works on a wide variety of operating systems with wildly varying restrictions on what their file systems can do. It's quite a difficult problem. The Symbolics system contains a huge amount of mechanism for dealing with problems of this sort. The designers of Common Lisp felt that this mechanism was hairy and unnecessary (which was incorrect) and should not be in Common Lisp because it was too complex (which was probably correct, at least at the time). Not having this in Common Lisp leaves users in a bind, of course. Anyone who wanted to solve such problems would do well to start by reading the Symbolics documentation and talking to some Symbolics users who have used a variety of file systems.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 30 May 86 00:28:24 EDT Received: from XEROX.COM by SU-AI.ARPA with TCP; 29 May 86 21:21:30 PDT Received: from Cabernet.ms by ArpaGateway.ms ; 29 MAY 86 21:21:15 PDT Date: 29 May 86 21:21 PDT From: miller.pa@Xerox.COM Subject: Some questions To: Common-lisp@SU-AI.ARPA Message-ID: <860529-212115-1390@Xerox> Now that all you people and companies out there are doing all these common-lisp compilers, I'd like to ask you some questions: Do you transform the source to continuation passing style before compiling? How happy are you with this decision? If you don't, do you do tail-recursion optimizations anyway? If you do, do you do multiple values by calling the continuation with multiple arguments? Should the parameter list in a multiple-value-bind allow :optional, :rest, etc... 
When the silver book says that something has dynamic extent, it is allowed for an implementation to provide indefinite extent, since "it is [only] an error" to try to interact with a value whose extent has expired. Providing indefinite extent would be a clean way for an implementation to offer upwards compatible extensions of the language. This would be particularly useful for catch tags. Do you provide indefinite extent for anything for which dynamic extent is all that's required? Does a program have any portable way of testing whether such an extension is available? (e.g. *features*) Should there be? How happy are you with packages? In particular, for those familiar with T's reified lexical environments (aka LOCALEs), can you think of any reason for preferring packages? Can the global scoping environment consistently be considered to simply be the outermost lexical environment? Can T's LOCALEs be added to the language in an upwards compatible way? Has anybody given any thought to defining a formal semantics for common-lisp? Do you think there is any hope for such a thing? What about a simple explanatory meta-interpreter (in the Scheme tradition)? Any common-lisp partial evaluators out there? MarkM  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 29 May 86 22:31:35 EDT Received: from UTAH-20.ARPA by SU-AI.ARPA with TCP; 29 May 86 19:21:52 PDT Date: Thu 29 May 86 20:20:04-MDT From: SANDRA Subject: portability of pathnames To: common-lisp@SU-AI.ARPA cc: jar@AI.AI.MIT.EDU Message-ID: <12210693019.8.LOOSEMORE@UTAH-20.ARPA> Date: Thu, 29 May 86 16:21 EDT From: Jonathan A Rees To: LOOSEMORE@UTAH-20.ARPA Date: Tue 27 May 86 11:41:42-MDT From: SANDRA 1. You can't count on being able to directly manipulate pathname components. Explicit namestrings generally lose too. I can believe that namestrings lose, but I think an explanation of why pathnames lose would be very instructive. Please send a message to common-lisp elaborating on this. Thanks. 
Jonathan The main problem with pathnames is that not all machines that you might want to run Common Lisp on have something that corresponds to each of the various components. A particularly brain-damaged example is the Cray, which (as I understand it) has no directories, no file types, and an 8-character limit on filenames. IBMs running VM/CMS aren't quite as badly off; at least there you can have file types and several virtual devices around, but still no directories, and of course no versions either. Unix has directories, of course, but using a "." to delimit the file type is simply a convention and there are no versions or things that correspond to devices. And handling for host names under Unix is pretty random, depending on which Unix implementation you happen to have. The problem as I see it is with make-pathname, where you have to explicitly give it values for each component. You can't guarantee that a CL implementation on a brand "x" computer will be able to make sense out of all the components, or what values are valid for each component in that implementation or that host. The manual does say that strings are valid for most of the components, but most operating systems have some limitations about the lengths of the components, or what characters are legal in filenames, or require delimiters around the components that are totally nonsensical to other operating systems. Here's an example from the real world. PCLS was mostly developed under Unix, which allows the use of hyphens in filenames, which kind of crept into the names of some of the system modules. (We use "require" to load them during the build process; require passes the symbol-name of the module as the :name argument to make-pathname.) When we took the code over to VMS, we were stuck with all these references to filenames with hyphens in them, which aren't allowed in VMS (yet -- this is supposed to change soon). 
There is currently a very grody piece of code in there that looks for hyphens in filenames and translates them to underscores. Is make-pathname supposed to signal errors if you feed it components that don't make sense for the given (or default) host? Or is it supposed to try to "patch up" the problem components in some way that makes sense for that host, such as removing or translating illegal characters or converting a Unix directory specification with slashes into a VMS directory specification with square brackets? Or do we just assume "it is an error" to supply pathname components that are illegal for the given host? -Sandra Loosemore -------  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 29 May 86 21:13:33 EDT Received: from [192.10.41.41] by SU-AI.ARPA with TCP; 29 May 86 18:05:39 PDT Received: from EUPHRATES.SCRC.Symbolics.COM by ELEPHANT-BUTTE.SCRC.Symbolics.COM via CHAOS with CHAOS-MAIL id 11421; Thu 29-May-86 18:21:31 EDT Date: Thu, 29 May 86 18:21 EDT From: David A. Moon Subject: Are isomorphic structures EQUAL? To: Jonathan A Rees cc: common-lisp@SU-AI.ARPA In-Reply-To: <"860529142304.2.jar@AI"@ROCKY-GRAZIANO.LCS.MIT.EDU> Message-ID: <860529182101.4.MOON@EUPHRATES.SCRC.Symbolics.COM> Date: Thu, 29 May 86 14:23 EDT From: Jonathan A Rees Date: Wed, 28 May 86 18:37 EDT From: David A. Moon Date: 11 Apr 1986 16:43-EST From: NGALL@G.BBN.COM What happens to implementations that want to represent structures using plain old vectors? How will EQUAL distinguish vectors from structure-vectors? As Rees pointed out, such an implementation would not be legal. I checked CLtL again and found (as far as I could tell) that it IS permissible for an implementation to represent structures as vectors, or as numbers, packages, symbols, or any other kind of object. But it's not permissible to implement them as -plain- -old- vectors. They have to be distinguishable, at least because the language requires structures to print differently from vectors. 
It's true as far as I can see that the language allows, after (defstruct foo bar), (subtypep 'foo 'vector) to be either true or false, on an implementation-dependent basis, and (typep (make-foo) 'vector) to be either true or false. In my implementation, for example, the subtypep is false and the typep is true (I wonder if that's a bug?). I would like to change the language so that the type of structures (whose DEFSTRUCT doesn't use :TYPE) is disjoint from other types. Your arguments for this are good, but on the other hand this change would be adding substantial new constraints on implementations, wouldn't it? Probably the reason why CLtL is so coy about exactly how structures are implemented is to maximize implementation freedom for some reason.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 29 May 86 18:01:06 EDT Received: from SCRC-YUKON.ARPA by SU-AI.ARPA with TCP; 29 May 86 14:50:44 PDT Received: from WHITE-BIRD.SCRC.Symbolics.COM by SCRC-YUKON.ARPA via CHAOS with CHAOS-MAIL id 27214; Thu 29-May-86 05:45:35 EDT Date: Thu, 29 May 86 05:41 EDT From: Robert W. Kerns Subject: keyword To: common-lisp@SU-AI.ARPA In-Reply-To: <8605280139.AA00192@kurims.kyoto-u.junet> Supersedes: <860529053929.0.RWK@WHITE-BIRD.SCRC.Symbolics.COM> Message-ID: <860529054123.1.RWK@WHITE-BIRD.SCRC.Symbolics.COM> [I can't seem to mail to yuasa@kurims.kurims.kyoto-u.junet directly; hope he receives it via the list]. Date: Wed, 28 May 86 10:39:30+0900 From: yuasa@kurims.kurims.kyoto-u.junet Date: Fri, 23 May 86 17:34 EDT From: Charles Hornig Date: Fri, 23 May 86 05:50:43 PST From: franz!fimass!jkf@kim.Berkeley.EDU (John Foderaro) Is there an official 'online' list of the external lisp package symbols implied by the Common Lisp book? I would add COMPILATION-SPEED and remove SEQUENCEP. I would remove KEYWORD. -- Taiichi I would put it back! See the list of type specifiers on page 43.  
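[For reference, KEYWORD appears in that list because it names a standard type, which is why the symbol must be exported; a quick check, with hypothetical symbols:

```lisp
;; KEYWORD is the type of symbols interned in the KEYWORD package
;; (CLtL table 4-1, p. 43), so the symbol KEYWORD must be in LISP.
(typep ':foo 'keyword)   ; => T
(typep 'car  'keyword)   ; => NIL
```
]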
Received: from SU-AI.ARPA by AI.AI.MIT.EDU 29 May 86 17:45:36 EDT Received: from [192.10.41.223] by SU-AI.ARPA with TCP; 29 May 86 14:29:42 PDT Received: from FIREBIRD.SCRC.Symbolics.COM by SAPSUCKER.SCRC.Symbolics.COM via CHAOS with CHAOS-MAIL id 24635; Thu 29-May-86 10:02:30 EDT Date: Thu, 29 May 86 10:07 EDT From: David C. Plummer Subject: *DEBUG-IO* To: Kent M Pitman , COMMON-LISP@SU-AI.ARPA In-Reply-To: <860529012732.4.KMP@RIO-DE-JANEIRO.SCRC.Symbolics.COM> Message-ID: <860529100713.6.DCP@FIREBIRD.SCRC.Symbolics.COM> Date: Thu, 29 May 86 01:27 EDT From: Kent M Pitman I didn't realize that the *xxx-IO* variables were documented to contain synonym streams. In fact, I wish this had been disallowed. The reason this came up is that I had an application which wanted to temporarily use the *DEBUG-IO* for a normal interaction. I thought I was doing the right thing by doing: (LET ((*TERMINAL-IO* *DEBUG-IO*)) ...) but in fact I lost completely in 3600 Release 6 Common Lisp because *DEBUG-IO* had (correctly) contained a synonym stream for *TERMINAL-IO* and I ended up with a circular synonym stream in *TERMINAL-IO*. Shouldn't you have done (let ((*standard-output* *debug-io*) (*standard-input* *debug-io*)) ...) instead? It is almost always wrong to set or bind *terminal-io*.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 29 May 86 17:08:56 EDT Received: from [192.5.104.199] by SU-AI.ARPA with TCP; 29 May 86 13:57:56 PDT Received: from BOETHIUS.THINK.COM by Godot.Think.COM; Thu, 29 May 86 16:57:33 edt Date: Thu, 29 May 86 16:58 EDT From: Guy Steele Subject: Is &body REALLY like &rest? To: Pavel.pa@Xerox.COM, Common-lisp@SU-AI.ARPA Cc: gls@AQUINAS In-Reply-To: <860528-160852-1701@Xerox> Message-Id: <860529165824.2.GLS@BOETHIUS.THINK.COM> Date: 28 May 86 16:08 PDT From: Pavel.pa@Xerox.COM Another in a long series of clarifications of clarifications is requested. 
In CLtL, page 145, it says ``&body This is identical in function to &rest...'' In Guy's clarifications, it says ``145 Extend the syntax of an &body parameter to DEFMACRO to allow writing &body (body-var [declarations-var [doc-var]])'' so as to make it easy to call PARSE-BODY. ... I agree with KMP's remarks on this subject. I would also like to point out that this change is, I believe, not from the list of clarifications but from the list of suggested changes. --Guy  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 29 May 86 16:50:40 EDT Received: from MC.LCS.MIT.EDU by SU-AI.ARPA with TCP; 29 May 86 13:39:29 PDT Received: from MX.LCS.MIT.EDU by MC.LCS.MIT.EDU via Chaosnet; 29 MAY 86 16:23:36 EDT Received: from ROCKY-GRAZIANO.LCS.MIT.EDU by MX.LCS.MIT.EDU via Chaosnet; 29 MAY 86 14:24:05 EDT Date: Thu, 29 May 86 14:23 EDT From: Jonathan A Rees Subject: Are isomorphic structures EQUAL? To: Moon@SCRC-STONY-BROOK.ARPA, common-lisp@SU-AI.ARPA In-Reply-To: <860528183719.5.MOON@EUPHRATES.SCRC.Symbolics.COM> Message-ID: <"860529142304.2.jar@AI"@ROCKY-GRAZIANO.LCS.MIT.EDU> Date: Wed, 28 May 86 18:37 EDT From: David A. Moon Date: 11 Apr 1986 16:43-EST From: NGALL@G.BBN.COM What happens to implementations that want to represent structures using plain old vectors? How will EQUAL distinguish vectors from structure-vectors? As Rees pointed out, such an implementation would not be legal. I checked CLtL again and found (as far as I could tell) that it IS permissible for an implementation to represent structures as vectors, or as numbers, packages, symbols, or any other kind of object. I would like to change the language so that the type of structures (whose DEFSTRUCT doesn't use :TYPE) is disjoint from other types. I think this is desirable both because it's a generally useful feature (one can create new opaque types, and rely on disjointness) and because it makes it easier to avoid implementation dependencies. 
It is very easy now to unwittingly assume in a program that one's structures aren't e.g. symbols or vectors or functions, and have the program work in 9 out of 10 Common Lisp implementations, and then fail in a very obscure way when moved to a new implementation. I don't much care whether EQUAL descends into structures, as long as it's consistent within and between implementations. I would mildly object to having it recursively compare components because that seems to be a violation of the data abstraction capability that structures are supposed to provide. People who use :TYPE have to be aware that they can have problems if they change an implementation from a transparent one to an opaque one; not only EQUAL, but also CONCATENATE and TYPEP will change their behavior. Jonathan  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 29 May 86 16:38:39 EDT Received: from [192.10.41.41] by SU-AI.ARPA with TCP; 29 May 86 13:31:24 PDT Received: from RIO-DE-JANEIRO.SCRC.Symbolics.COM by ELEPHANT-BUTTE.SCRC.Symbolics.COM via CHAOS with CHAOS-MAIL id 10955; Thu 29-May-86 13:03:23 EDT Date: Thu, 29 May 86 13:03 EDT From: Kent M Pitman Subject: *DEBUG-IO* To: DCP@SCRC-STONY-BROOK.ARPA cc: KMP@SCRC-STONY-BROOK.ARPA, Common-Lisp@SU-AI.ARPA In-Reply-To: <860529100713.6.DCP@FIREBIRD.SCRC.Symbolics.COM> Message-ID: <860529130333.0.KMP@RIO-DE-JANEIRO.SCRC.Symbolics.COM> I wanted code to behave temporarily as if *debug-io* were the terminal. Arguably, I should have done (let ((*terminal-io* *debug-io*) (*standard-input* (make-synonym-stream '*terminal-io*)) (*standard-output* (make-synonym-stream '*terminal-io*))) ...) but if I had instead done (let ((*standard-input* *debug-io*) (*standard-output* *debug-io*)) ...) then I would have been screwed up by code which inside my LET just did: (let ((*standard-input* (make-synonym-stream '*terminal-io*)) (*standard-output* (make-synonym-stream '*terminal-io*))) ...) which would have them writing on the first terminal (not my intention). 
The whole point of my binding the window in the first place is that I wanted to leave the main screen undisturbed during a debugging break. If recursive breaks or whatever were permitted a way of realizing that I started off on the original window, I wouldn't get the effect I wanted. This technique is highly useful in split-screen debugging of graphics code.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 29 May 86 10:31:29 EDT Received: from UTAH-20.ARPA by SU-AI.ARPA with TCP; 29 May 86 07:26:39 PDT Date: Thu 29 May 86 08:24:49-MDT From: SANDRA Subject: Apology To: common-lisp@SU-AI.ARPA Message-ID: <12210562811.21.LOOSEMORE@UTAH-20.ARPA> It appears I have unintentionally offended a number of residents of San Francisco with a remark in my last posting that was intended to be humorous. That is such an old joke it never occurred to me that people would not recognize it as such. I apologize, and in the future I will try to be more consistent with the use of smiley faces. -Sandra Loosemore -------  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 29 May 86 05:16:36 EDT Received: from [192.10.41.41] by SU-AI.ARPA with TCP; 29 May 86 01:58:08 PDT Received: from RIO-DE-JANEIRO.SCRC.Symbolics.COM by ELEPHANT-BUTTE.SCRC.Symbolics.COM via CHAOS with CHAOS-MAIL id 10633; Thu 29-May-86 04:55:13 EDT Date: Thu, 29 May 86 04:55 EDT From: Kent M Pitman Subject: Is &body REALLY like &rest? To: Pavel.pa@XEROX.ARPA cc: Common-lisp@SU-AI.ARPA, KMP@SCRC-STONY-BROOK.ARPA In-Reply-To: <860528-160852-1701@Xerox> Message-ID: <860529045522.4.KMP@RIO-DE-JANEIRO.SCRC.Symbolics.COM> Date: 28 May 86 16:08 PDT From: Pavel.pa@Xerox.COM Subject: Is &body REALLY like &rest? Another in a long series of clarifications of clarifications is requested. 
In CLtL, page 145, it says ``&body This is identical in function to &rest...'' In Guy's clarifications, it says ``145 Extend the syntax of an &body parameter to DEFMACRO to allow writing &body (body-var [declarations-var [doc-var]])'' so as to make it easy to call PARSE-BODY. In CLtL, page 146, it says ``Anywhere in the lambda-list where a parameter name may appear and where ordinary lambda-list syntax ... does not otherwise allow a list, a lambda-list may appear in place of the parameter name.'' This allows for destructuring. Question: Does the following DEFMACRO do destructuring or does it call PARSE-BODY? (defmacro foo (&rest (a b c)) (bar a b c)) The real question is whether or not the clarification introduced a difference in the meanings of &rest and &body. One possible interpretation is that a list after &rest does destructuring and a list after &body implies a call to PARSE-BODY. Which one is right? I'm the one that suggested this extension to &BODY, so I have some remarks to make about this... Before saying anything, let me just remark that all of your excerpts above are taken from the DEFMACRO section. In normal LAMBDA expressions, &BODY is not a valid keyword; only &REST is allowed there. We wouldn't have had both &REST and &BODY if there weren't at least some reason for them being allowed to differ. The high-level reason for making the distinction is that some forms, like DEFUN and LET back-indent their bodies in some editors. eg, DEFUN indents like:
(DEFUN FOO (X Y Z)
  (DECLARE (SPECIAL X))
  (BAR Y Z))
The way you get things to indent that way on some systems (eg, the Lisp Machine) is by writing: (DEFMACRO DEFUN (NAME BVL &BODY FORMS) ...) If you'd used &REST instead of &BODY, it would have wanted to indent like:
(DEFUN FOO (X Y Z)
       (DECLARE (SPECIAL X))
       (BAR Y Z))
The case where &BODY is used is in this case where there's going to be a body. That's distinguished from something like SETQ which has no body. So you'd write (DEFMACRO SETQ (&REST PAIRS) ...) 
rather than using &BODY to get indentation like:
(SETQ X Y
      Z W
      Q R)
rather than
(SETQ X Y
  Z W
  Q R)
Not completely by coincidence, bodies often have declarations in them, and rest arguments seldom do. As such, there's no reason to suppose that &REST should continue to be so similar to &BODY since it would generally be meaningless to want to parse declarations in a SETQ. I'd vote to not let &REST use these extra arguments. On the other hand, when I first suggested this, I am pretty sure I suggested doing &BODY body-var [declarations-var [doc-var]] rather than &BODY (body-var [declarations-var [doc-var]]) since there's no other possible interpretation to variables following the body variable. Not only is my original version of this extension completely upward-compatible with existing code (since it doesn't change something that destructured into something that does something completely different), but it also allows better syntax for defaulting the declarations and/or doc. eg, (DEFMACRO DEFUN-SPECIAL (NAME VARS &BODY FORMS (DECLARATIONS `((DECLARE (SPECIAL ,@VARS)))) (DOCUMENTATION (GUESS-SOME-DOC NAME))) ...) might specify how to create default DECLARATIONS and DOCUMENTATION in the absence of any explicit declarations and documentation.  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 29 May 86 01:55:32 EDT Received: from [128.32.130.7] by SU-AI.ARPA with TCP; 28 May 86 22:43:11 PDT Received: by kim.Berkeley.EDU (5.51/1.12) id AA18965; Wed, 28 May 86 22:43:01 PDT Received: from fimass by franz (5.5/3.14) id AA27442; Wed, 28 May 86 22:34:28 PDT Received: by fimass (5.5/3.14) id AA01583; Wed, 28 May 86 21:32:49 PST From: franz!fimass!jkf@kim.Berkeley.EDU (John Foderaro) Return-Path: Message-Id: <8605290532.AA01583@fimass> To: common-lisp@su-ai.arpa In-Reply-To: Your message of Wed, 28 May 86 10:39:30 V. <8605280139.AA00192@kurims.kyoto-u.junet> Date: Wed, 28 May 86 21:32:43 PST >> I would remove KEYWORD. >> -- Taiichi >> 'keyword' is listed in table 4-1 on page 43. 
-john foderaro, franz inc.

Received: from SU-AI.ARPA by AI.AI.MIT.EDU 29 May 86 01:42:47 EDT
Received: from [192.10.41.41] by SU-AI.ARPA with TCP; 28 May 86 22:30:08 PDT
Received: from RIO-DE-JANEIRO.SCRC.Symbolics.COM by ELEPHANT-BUTTE.SCRC.Symbolics.COM via CHAOS with CHAOS-MAIL id 10590; Thu 29-May-86 01:27:43 EDT
Date: Thu, 29 May 86 01:27 EDT
From: Kent M Pitman
Subject: *DEBUG-IO*
To: COMMON-LISP@SU-AI.ARPA
cc: KMP@SCRC-STONY-BROOK.ARPA
Message-ID: <860529012732.4.KMP@RIO-DE-JANEIRO.SCRC.Symbolics.COM>

I didn't realize that the *xxx-IO* variables were documented to contain synonym streams. In fact, I wish this had been disallowed.

The reason this came up is that I had an application which wanted to temporarily use the *DEBUG-IO* for a normal interaction. I thought I was doing the right thing by doing:

	(LET ((*TERMINAL-IO* *DEBUG-IO*)) ...)

but in fact I lost completely in 3600 Release 6 Common Lisp because *DEBUG-IO* had (correctly) contained a synonym stream for *TERMINAL-IO* and I ended up with a circular synonym stream in *TERMINAL-IO*.

At the very least, the manual should contain a mention of this problem at the top of p329 where it talks about why you shouldn't change *TERMINAL-IO*. However, more importantly, we should really have and encourage the use of a function like the LispM's SI:FOLLOW-SYN-STREAM which recursively dereferences a synonym stream and gives you a stream that's safe to move around. That way, I could safely write:

	(LET ((*TERMINAL-IO* (FOLLOW-SYNONYM-STREAM *DEBUG-IO*))) ...)

and you wouldn't have to have that warning about binding *TERMINAL-IO*. I personally don't see any reason that binding/setting *TERMINAL-IO* should be prohibited as long as the thing to which you bind it represents a valid virtual terminal (whatever that means for the given operating system).
Note that there are no portable operations for creating such a virtual terminal, but that doesn't mean that portable code shouldn't be able to manipulate such objects when it runs across them. In particular, I think it should be valid to shuffle the synonym-stream-dereferenced contents of *QUERY-IO*, *DEBUG-IO*, and *TERMINAL-IO* back and forth.

Received: from SU-AI.ARPA by AI.AI.MIT.EDU 28 May 86 23:16:31 EDT
Received: from SCRC-QUABBIN.ARPA by SU-AI.ARPA with TCP; 28 May 86 20:04:30 PDT
Received: from EUPHRATES.SCRC.Symbolics.COM by SCRC-QUABBIN.ARPA via CHAOS with CHAOS-MAIL id 3367; Wed 28-May-86 18:40:45 EDT
Date: Wed, 28 May 86 18:37 EDT
From: David A. Moon
Subject: Are isomorphic structures EQUAL?
To: common-lisp@SU-AI.ARPA
In-Reply-To: <[G.BBN.COM]11-Apr-86 16:43:13.NGALL>
Message-ID: <860528183719.5.MOON@EUPHRATES.SCRC.Symbolics.COM>

    Date: 11 Apr 1986 16:43-EST
    From: NGALL@G.BBN.COM

    I don't want to reopen the discussion that we had about equality a
    while back, but someone in my group just tripped over an ambiguity in
    the manual. She was using DEFSTRUCT with the (:TYPE ...) option and
    using EQUAL to compare structures and everything worked fine since
    EQUAL did a component-wise equality test. Then she got bold and
    decided to let DEFSTRUCT choose the representation, so she left off
    the (:TYPE ...) option. Lo and behold, EQUAL did not do a
    component-wise equality test, it tested for EQL! So she looked at the
    def. on page 80 where it said "Certain objects that have components
    are EQUAL if they are of the same type and corresponding components
    are EQUAL." Unfortunately, the rest of the definition did not make it
    at all clear whether or not objects created by DEFSTRUCT were among
    these "Certain objects". Looking on the next page at the def. of
    EQUALP (which happened to do a component-wise equality test), she
    read "Objects that have components..." The only difference was the
    word "Certain"!
    So we tried it on another CL (Symbolics CL), and got the same
    behavior (as with VaxLisp). Which leads me to almost believe that
    EQUAL is NOT supposed to do a component-wise test for equality.
    Except... What happens to implementations that want to represent
    structures using plain old vectors? How will EQUAL distinguish
    vectors from structure-vectors?

As Rees pointed out, such an implementation would not be legal.

    This interaction of DEFSTRUCT/EQUAL is going to cause a lot of bugs.
    People are going to prototype structures using the (:TYPE LIST)
    option and use EQUAL to do equality tests. Then when they remove the
    :TYPE option, KaBoom!

I don't think that's a plausible way to program, but I agree that "equality" is an area of the language that needs work.

    There will be no way to do a component-wise test (using EQUAL on each
    component) on two structures unless one writes a structure-specific
    equality predicate. Therefore, I propose that two structures of the
    same type that are created by DEFSTRUCT (regardless of the :TYPE
    option) be tested for equality component-wise and that the CLtL make
    this clear. Until then, she can get by with EQUALP (since her
    structures don't contain strings, floats, etc.).

I don't understand this. I can't find anything in the manual that says that EQUALP behaves differently than EQUAL for structures. Actually I can't find anything in the manual that says anything at all about the behavior of either EQUAL or EQUALP on structures. It may be that by coincidence the two implementations you mentioned in your message both compare components of structures (with the default defstruct options, especially no :TYPE) in EQUALP, and do not compare components of structures in EQUAL, but I don't think the Common Lisp manual says that all implementations have to do that.
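The structure-specific equality predicate mentioned in the exchange above is easy enough to write for any one structure type; a minimal sketch follows, in which POINT and POINT-EQUAL are hypothetical names used only for illustration:

```lisp
;; Hand-written component-wise comparison for one structure type,
;; needed because CLtL does not promise that EQUAL descends structures.
(defstruct point x y)

(defun point-equal (p1 p2)
  (and (equal (point-x p1) (point-x p2))
       (equal (point-y p1) (point-y p2))))
```

The burden, of course, is that such a predicate must be rewritten for every structure type and updated whenever a slot is added, which is exactly what the proposal to make EQUAL (or some standard predicate) descend structures would eliminate.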
Received: from SU-AI.ARPA by AI.AI.MIT.EDU 28 May 86 20:51:05 EDT Received: from SU-SHASTA.ARPA by SU-AI.ARPA with TCP; 28 May 86 17:40:19 PDT Received: by su-shasta.arpa; Wed, 28 May 86 17:38:28 PDT Received: by ntt.junet (4.12/4.7JC-7) CHAOS with CHAOS-MAIL id AA12890; Wed, 28 May 86 11:33:42 jst From: yuasa@kurims.kurims.kyoto-u.junet Received: by ntt.junet (4.12/4.7JC-7) CHAOS with CHAOS-MAIL id AA12880; Wed, 28 May 86 11:33:08 jst Received: by kurims.kyoto-u.junet (2.0/4.7) id AA00192; Wed, 28 May 86 10:39:30+0900 Date: Wed, 28 May 86 10:39:30+0900 Message-Id: <8605280139.AA00192@kurims.kyoto-u.junet> To: common-lisp@su-ai.arpa Date: Fri, 23 May 86 17:34 EDT From: Charles Hornig Date: Fri, 23 May 86 05:50:43 PST From: franz!fimass!jkf@kim.Berkeley.EDU (John Foderaro) Is there an official 'online' list of the external lisp package symbols implied by the Common Lisp book? If not, I'd like to start the process of computing the list by supplying the following list which I believe to be very close to correct. Are there any additions or deletions? ...list removed... I would add COMPILATION-SPEED and remove SEQUENCEP. I would remove KEYWORD. -- Taiichi  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 28 May 86 19:22:16 EDT Received: from XEROX.COM by SU-AI.ARPA with TCP; 28 May 86 16:09:01 PDT Received: from Cabernet.ms by ArpaGateway.ms ; 28 MAY 86 16:08:52 PDT Date: 28 May 86 16:08 PDT From: Pavel.pa@Xerox.COM Subject: Is &body REALLY like &rest? To: Common-lisp@SU-AI.ARPA Message-ID: <860528-160852-1701@Xerox> Another in a long series of clarifications of clarifications is requested. In CLtL, page 145, it says ``&body This is identical in function to &rest...'' In Guy's clarifications, it says ``145 Extend the syntax of an &body parameter to DEFMACRO to allow writing &body (body-var [declarations-var [doc-var]])'' so as to make it easy to call PARSE-BODY. 
In CLtL, page 146, it says ``Anywhere in the lambda-list where a parameter name may appear and where ordinary lambda-list syntax ... does not otherwise allow a list, a lambda-list may appear in place of the parameter name.'' This allows for destructuring. Question: Does the following DEFMACRO do destructuring or does it call PARSE-BODY? (defmacro foo (&rest (a b c)) (bar a b c)) The real question is whether or not the clarification introduced a difference in the meanings of &rest and &body. One possible interpretation is that a list after &rest does destructuring and a list after &body implies a call to PARSE-BODY. Which one is right? Pavel  Received: from SU-AI.ARPA by AI.AI.MIT.EDU 28 May 86 01:29:48 EDT Received: from [192.10.41.41] by SU-AI.ARPA with TCP; 27 May 86 22:22:41 PDT Received: from RIO-DE-JANEIRO.SCRC.Symbolics.COM by ELEPHANT-BUTTE.SCRC.Symbolics.COM via CHAOS with CHAOS-MAIL id 9535; Wed 28-May-86 01:20:29 EDT Date: Wed, 28 May 86 01:20 EDT From: Kent M Pitman Subject: PEEK-CHAR To: Dave.Touretzky@CMU-CS-A.ARPA cc: COMMON-LISP@SU-AI.ARPA References: The message of 28 May 86 01:17-EDT from Dave.Touretzky@A.CS.CMU.EDU, <860528010335.6.KMP@RIO-DE-JANEIRO.SCRC.Symbolics.COM> Message-ID: <860528012039.7.KMP@RIO-DE-JANEIRO.SCRC.Symbolics.COM> Date: 28 May 86 01:17 EDT From: Dave.Touretzky@A.CS.CMU.EDU To: Kent M Pitman Subject: Re: MAKE-ECHO-STREAM, and related issues If you don't want peeks to echo, shouldn't your example print "FOOX" rather than "XFOO" ? You said the opposite in your message. Oops. Yes, indeed. "FOOX" is what I meant to say should happen by default. "XFOO" is the easy one to simulate given the other. Sorry for the confusion; thanks for catching it.  
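KMP's remark above, that "XFOO" is the easy behavior to simulate given a "FOOX" default, can be sketched as follows. ECHOING-PEEK-CHAR is a hypothetical name, and the sketch assumes an implementation in which a character put back by UNREAD-CHAR is not echoed a second time when it is re-read:

```lisp
;; Simulating "echo at peek time" on an echo stream whose PEEK-CHAR
;; does not echo: READ-CHAR echoes the character as a side effect,
;; then UNREAD-CHAR puts it back for the real reader.
(defun echoing-peek-char (echo-stream)
  (let ((char (read-char echo-stream)))
    (unread-char char echo-stream)
    char))
```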
Received: from SU-AI.ARPA by AI.AI.MIT.EDU 28 May 86 01:17:01 EDT
Received: from [192.10.41.41] by SU-AI.ARPA with TCP; 27 May 86 22:05:50 PDT
Received: from RIO-DE-JANEIRO.SCRC.Symbolics.COM by ELEPHANT-BUTTE.SCRC.Symbolics.COM via CHAOS with CHAOS-MAIL id 9525; Wed 28-May-86 01:03:32 EDT
Date: Wed, 28 May 86 01:03 EDT
From: Kent M Pitman
Subject: MAKE-ECHO-STREAM, and related issues
To: Common-Lisp@SU-AI.ARPA
cc: KMP@SCRC-STONY-BROOK.ARPA
Message-ID: <860528010335.6.KMP@RIO-DE-JANEIRO.SCRC.Symbolics.COM>

Assuming that "FOO" is a file containing the single character "X", consider the expression:

	(WITH-OPEN-FILE (STREAM "FOO" :DIRECTION :INPUT)
	  (LET ((ECHO-STREAM (MAKE-ECHO-STREAM STREAM *TERMINAL-IO*)))
	    (PEEK-CHAR NIL ECHO-STREAM)
	    (PRIN1 'FOO)
	    (READ-CHAR ECHO-STREAM)))

Should this print "XFOO" or "FOOX"? I'll argue strongly that it's pretty critical that it be "XFOO" since it's easy for the user to simulate the "FOOX" behavior and next to impossible to simulate the other behavior. Does anybody buy that? Can anyone find a passage in CLtL where it says one way or the other what happens in this case? The doc on p330 is completely vague on the issue.

In general, I think we should be clear on the fact that any peek operation must not echo. For example, a system which is full duplex and not line-at-a-time (and hence must decide to echo the char either at the first peek-char or at the read-char) should wait until the read. Among other things, failure to do this keeps you from writing a reader which can correctly handle a prompted read of foo'foo as two separate expressions. You end up seeing:

	Form: foo'
	Form: foo

rather than

	Form: foo
	Form: 'foo
	Form:

On a line-at-a-time system, where the input is echoed at prescan time, you end up seeing:

	Form: foo'foo
	Form:
	Form:

which is not optimal, but is probably tolerable since users of such systems are probably used to this sort of effect.
In fact, using LISTEN, you can write something which (barring timing errors due to high-speed typists) heuristically manages to optimize out the intermediate prompt.

Would people be willing to agree that we should have a way to detect whether the terminal, implementation, or whatever was line-at-a-time or not, or is that too ill-defined? It would certainly be very useful to code which wants to do something graceful in the face of radically varying styles of input scanning.

Received: from SU-AI.ARPA by AI.AI.MIT.EDU 27 May 86 17:14:10 EDT
Received: from UTAH-20.ARPA by SU-AI.ARPA with TCP; 27 May 86 10:43:35 PDT
Date: Tue 27 May 86 11:41:42-MDT
From: SANDRA
Subject: portability
To: common-lisp@SU-AI.ARPA
Message-ID: <12210074365.15.LOOSEMORE@UTAH-20.ARPA>

Here's my two cents worth on the code portability issue. I think I've probably had as much experience with writing portable Lisp code as anyone here, being one of the chief PCLS architects (currently running under Unix and VMS), and having written a large amount of application code which has been ported to other CL implementations.

I'm probably going to get flamed for this, but in my view, the name of the package containing local extensions, or exactly what symbols live in the Lisp package, are fairly trivial issues. My experience has been that, if you want your code to be portable, you have to put a lot of effort into making it that way from the beginning, and there are many problems more troublesome than obscure package manipulations that you have to overcome in the process. Here are a few of the issues that come to mind:

1. You can't count on being able to directly manipulate pathname components. Explicit namestrings generally lose too.

2. There is no agreement on what *features* should contain, or what package the symbols live in. Hence, you can't really depend on #+ and #- to do anything useful.

3.
Loading files can really be a nuisance, particularly if you want to be able to look for them in more than one place, or if you want to look for them somewhere other than the default directory. "Require" can be made to do the right thing in some implementations but not others. "Load" is OK if you can somehow get a pathname built. (I don't really have a good solution for this problem; in PCLS we use a few global variables to hold the names of various system-dependent files and where to look for them.)

4. It's amazing how certain implementations that don't admit to being subsets are missing huge amounts of functionality.

5. Handling for streams regarding prompting, buffering, terminal interrupts, etc. varies widely between implementations, depending on what the primitives offered by the operating system are. (The VMS and Unix versions of PCLS don't even behave the same in this respect.)

6. Lack of a portable error-handling mechanism makes it tough to do stand-alone applications that will be used by non-Lisp wizards. (Popping into the debugger is fine for developers, but doesn't do much for random users who think that Lisp is what they speak in San Francisco....) Hopefully this will change in the near future.

7. There is no standardized model of what kinds of processing happen at compile time, particularly regarding defining forms such as deftype and defstruct, or perverse manipulations of the package system (setq'ing *package* directly?). Code that works fine interpretively often fails to compile correctly.

My general approach to portability is to avoid the use of implementation-specific or poorly standardized features where possible, and where I must refer to them, put all the references in a separate file and wrap an interface function around them. It makes it a lot easier when everything you need to tweak to get the code to run under a different implementation is all in one place instead of scattered randomly through two dozen files.
-Sandra Loosemore
-------

Received: from SU-AI.ARPA by AI.AI.MIT.EDU 27 May 86 15:06:02 EDT
Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 27 May 86 11:45:44 PDT
Received: ID ; Tue 27 May 86 14:44:24-EDT
Date: Tue, 27 May 1986 14:44 EDT
Message-ID:
Sender: FAHLMAN@C.CS.CMU.EDU
From: "Scott E. Fahlman"
To: common-lisp@SU-AI.ARPA
Subject: Keeping track of decisions

As several people have pointed out, we should keep a list of all official changes and decisions in a separate file, in addition to making the appropriate changes in the evolving document. That way, implementors will not have to scan the new manual looking for differences between that and Steele and wondering if such differences are intentional. This was my intent all along, but I forgot to mention this in the earlier mail.

We will probably number and date each of these decisions, just to make it clear when things have gone from discussion to decided (where "decided" means that it will definitely be in the proposal we produce).

-- Scott

Received: from SU-AI.ARPA by AI.AI.MIT.EDU 27 May 86 13:04:21 EDT
Received: from [192.10.41.223] by SU-AI.ARPA with TCP; 27 May 86 09:54:57 PDT
Received: from FIREBIRD.SCRC.Symbolics.COM by SAPSUCKER.SCRC.Symbolics.COM via CHAOS with CHAOS-MAIL id 23555; Tue 27-May-86 12:50:31 EDT
Date: Tue, 27 May 86 12:54 EDT
From: David C. Plummer
Subject: Re: Where Pure Common Lisp lives
To: Steven Haflich , snyder%hplsny@HPLABS.ARPA
cc: common-lisp@SU-AI.ARPA
In-Reply-To: <8605240410.AA02152@mit-ems.ARPA>
Message-ID: <860527125422.0.DCP@FIREBIRD.SCRC.Symbolics.COM>

    Date: Sat, 24 May 86 00:10:06 edt
    From: Steven Haflich

    > From @SU-AI.ARPA:snyder%hplsny@hplabs.ARPA Fri May 23 11:58:17 1986
    >
    > Here's an alternative: How about leaving LISP:MAKE-PACKAGE alone,
    > and defining LOCAL:MAKE-PACKAGE to default to the LOCAL package?
You'll have to do the same for LISP:IN-PACKAGE, and you'll have to fix the compiler to give LOCAL:IN-PACKAGE the same special attention it gives LISP:IN-PACKAGE... It's disturbing how rapidly any hacking of the package system gets ugly. It's not in Chapter 11 for nothing...

Received: from SU-AI.ARPA by AI.AI.MIT.EDU 26 May 86 21:09:12 EDT
Received: from C.CS.CMU.EDU by SU-AI.ARPA with TCP; 26 May 86 18:01:42 PDT
Received: ID ; Mon 26 May 86 21:01:06-EDT
Date: Mon, 26 May 1986 21:01 EDT
Message-ID:
Sender: FAHLMAN@C.CS.CMU.EDU
From: "Scott E. Fahlman"
To: common-lisp@SU-AI.ARPA
Subject: Where we stand

A progress report from the Technical Committee is overdue. Here's where things stand:

First, the technical and steering committees have held electronic elections. I have been elected chairman of the technical committee through the end of 1986. Bob Mathis has been elected chairman of the steering committee. Both of us have the model that the main function of the chairman is to keep things moving and to moderate the discussions. As his messages have indicated, Bob has been moving along with the process of getting all this organized properly within ANSI and ISO.

Second, we have been thinking about how to proceed on the technical side. Most of us favor a model in which we start with an online version of the manual. Issues would be debated on the Common Lisp mailing list and, when things have settled as much as they are going to, the technical committee would make a decision. Each such decision is folded immediately into the manual, which is available online to everyone. At some point we decide we're done and send the document over to ANSI with our recommendation that this become the new standard for Common Lisp; ANSI then decides, and we go from there to ISO. Up to the point where ANSI makes a decision, all of this is just a set of proposals for a language spec, though implementation groups are free to act on these recommendations if they choose to.
It would probably be best to use the Digital Press book as the starting point if we can obtain the necessary rights; an alternative is to start from a manual Lucid has developed which is meant to describe the same language, but in different words, if we can obtain the necessary rights from Lucid. The worst option is to develop a new document from scratch -- a lot of redundant work, plus the likelihood that we will introduce some new ambiguities. We need to obtain, from Digital Press or Lucid or both, the right to create a derivative work that is intended to be used as a specification document, while those companies keep the copyright on their original version.

We are trying to work out exactly who should own the rights to the revised document. ANSI normally owns the copyright on their standards and makes some money selling the documents, but in this case we are determined that the final document should be readily available to everyone at a low price and that the text can be used by companies to create online and printed documents for their own products without a lot of red tape. None of us wants to put a lot of work into this document and then find that someone else controls the use of it. As you can imagine, this document situation raises all sorts of complicated legal issues, and it is going to take us some time to cut through it all. Lawyers don't work very fast.

I want to resolve a lot of the technical issues over the summer and have a document ready for submission to ANSI by the end of the year, and if we are to meet that goal we must get moving. And so, reluctantly, I propose that we begin trying to resolve issues a few at a time, and that we just record the decisions in an online file until we have a document that we can start working on. This means that we'll need a separate pass later on to incorporate the decisions into a document, and there is the danger that we'll inadvertently introduce new bugs at that time, but I see no good alternative.
We can't wait around any longer. If people don't object to this plan, we'll get started soon. I'll begin by sending out a message proposing some principles about how conservative we want to be, and when we've got some general agreement on that we can start on our backlog of issues. In addition to resolving known problems, especially in the area of code portability, we must agree on a reasonably complete error system. It is not absolutely essential, but I believe it would be very beneficial if we can agree on a set of object-oriented facilities in time to get them into this spec. Progress on iteration, windows/graphics, and standard subsets would be welcome, but not essential. -- Scott
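As a footnote to Sandra Loosemore's portability message earlier in this digest: the interface-function approach she describes, confining every implementation-specific reference to a single file, might look like the following sketch, in which every name is hypothetical:

```lisp
;;; sysdep.lisp -- the one file to edit when porting.
;;; All implementation-specific behavior hides behind small interface
;;; functions; the rest of the program calls only these.

(in-package 'my-application)

;; Find a source file in a list of directories, without exposing
;; namestring syntax to the rest of the program.
(defun find-system-file (name directories)
  (dolist (dir directories)
    (let ((path (merge-pathnames name dir)))
      (when (probe-file path)
        (return path)))))

;; Load a named module.  Portable code calls this; only this
;; definition changes between implementations.
(defun load-system-file (name directories)
  (let ((path (find-system-file name directories)))
    (if path
        (load path)
        (error "Cannot find file ~A" name))))
```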