From: UnixOS2 Archive To: "UnixOS2 Archive" Date: Fri, 6 Dec 2002 04:42:54 EST Subject: [UnixOS2_Archive] No. 390 ************************************************** Thursday 05 December 2002 Number 390 ************************************************** Subjects for today 1 RE: wxWindows-2.3.4 : Stefan Neis 2 RE: wxWindows-2.3.4 : Stefan Neis 3 RE: wxWindows-2.3.4 : Stefan Neis 4 RE: wxWindows-2.3.4 : Stefan Neis 5 Re: Autoconf : Stefan Neis 6 Re: configure - problems, Quoting : Stefan Neis 7 RE: wxWindows-2.3.4 : Stefan Neis 8 RE: wxWindows-2.3.4 : Stefan Neis 9 Re: Mailman getting close : John Poltorak 10 Re: wxWindows-2.3.4 : Ken Ames 11 got lost : Ken Ames 12 Re: Installing autoconf : Christian Hennecke 13 Re: wxWindows-2.3.4 : Ken Ames 14 RE: wxWindows-2.3.4 : Dave Webster 15 Repository for results of Perl tests : John Poltorak 16 Re: Mailman getting close : Ted Sikora 17 Re: Mailman getting close : Ted Sikora 18 Re: PKGINFO (was: installpkg) : John Poltorak 19 Re: Installing autoconf : Franz Bakan 20 Re: installpkg : John Poltorak **= Email 1 ==========================** Date: Fri, 6 Dec 2002 00:03:08 +0100 (CET) From: Stefan Neis Subject: RE: wxWindows-2.3.4 On Wed, 4 Dec 2002, Hakan wrote: > which makes me excited (cf. Open32). At that time I did ask for > examples of OS/2 programs, even a small one, which showed off > wxWindows. To the best of my recollection, we were never supplied with > one. I also have a recollection that someone else asked the very same > question over the past few days but again, no examples were offered. And I still won't offer any examples, except to suggest compiling the wxWindows samples yourself or looking at the screenshots of them on wxWindows.org. Over there you also might find some references to rather special-purpose commercial programs using wxWindows. I could offer you some of "our" (that's my other self working in a company, not this self working at university. 
;-) ) own commercial software; however, the OS/2 version is currently not yet available, partly for lack of time on my side (some smaller stand-alone parts of it should already compile&link&work with wxOS2 as it currently is), partly because the port to OS/2 is not yet quite complete. The point is that most "programmers" are just programming for the toolkit of the day that they are used to, for the platform they are used to, so currently you won't see many cross-platform applications, and even fewer of them with a GUI. However, it looks like there might be an increasing interest in multi-platform support as more and more companies are using several operating systems on their computers, no longer forcing everyone working for them to use the "standard OS used by everybody else", so this _could_ change in the coming years. Also, wxWindows is fairly good at providing GUI support for the various "platform independent" scripting languages; there are wxPerl, wxPython and several others (none of them working on OS/2 yet, but at least on Windows, GTK, Motif and MacOS). Regards, Stefan -- Micro$oft is not an answer. It is a question. The answer is 'no'. **= Email 2 ==========================** Date: Fri, 6 Dec 2002 00:09:42 +0100 (CET) From: Stefan Neis Subject: RE: wxWindows-2.3.4 On Thu, 5 Dec 2002, Dave Webster wrote: > I always wonder if I have this stuff right, so I post what I understand, like > throwing it against the wall, and see what sticks. > > I'm running out of disk space on my dev machine now, so having to deal with > TWO CVS trees and my own 5GB+ dev system is going to be dicey. Same problem here, but I already have two CVS trees over here - I guess I can get rid of the 2.2 tree for now, that will help some... Regards, Stefan -- Micro$oft is not an answer. It is a question. The answer is 'no'. 
**= Email 3 ==========================** Date: Fri, 6 Dec 2002 00:19:36 +0100 (CET) From: Stefan Neis Subject: RE: wxWindows-2.3.4 On Thu, 5 Dec 2002, Dave Webster wrote: > source code and compile it themselves. That's how it works, that's how it > works in wxWindows. You will NEVER get compiled sample executables with > wxWindows because we cannot provide a multi MB executable for every > combination of platform/compiler. Actually, it is usually somewhat different for OS/2: you'll find precompiled versions of gcc, emacs, XFree86 and what not, because some people are sacrificing their time/money on downloading, patching, compiling, fixing and uploading the packages (often also kindly adding some hacks to take care of the possibly different installation directories/drives preferred by different users), and we all do appreciate that kind of service. I might even find time during the upcoming holidays to compile versions of wxWindows-2.4.0 for OS/2 (_if_ it is released soon enough) and some of the samples with EMX and upload them to hobbes (and maybe do the same with a GTK+ version of it, so everyone with an X server running on OS/2 can compare side by side), but I won't make any promises. I intended to do exactly that for all wxWindows-2.2.x (GTK+ and/or Lesstif versions only) and actually found the time to do it for only one of them (2.2.5, IIRC). But I most definitely won't waste my time on doing that for one of the developer snapshots. Regards, Stefan -- Micro$oft is not an answer. It is a question. The answer is 'no'. **= Email 4 ==========================** Date: Fri, 6 Dec 2002 00:21:42 +0100 (CET) From: Stefan Neis Subject: RE: wxWindows-2.3.4 On Thu, 5 Dec 2002, Dave Webster wrote: > funny, my command line VA4 ran the same compiler as the IDE, codestore and > all, and had ENORMOUS memory leaks. Maybe the 'C' compiler was 3.x but not > the C++ one. I see. So I probably got that mixed up (as I said, it was all from reading, not from my own experience). 
Thanks for clarifying/correcting. Regards, Stefan -- Micro$oft is not an answer. It is a question. The answer is 'no'. **= Email 5 ==========================** Date: Fri, 6 Dec 2002 00:32:33 +0100 (CET) From: Stefan Neis Subject: Re: Autoconf On Wed, 4 Dec 2002, John Poltorak wrote: > ftp://ftp.gnu.org/gnu/autoconf/autoconf-2.57.tar.gz > > > Should I expect this to work correctly on OS/2, or does it need > any amendments? At least its config.guess does have my patch applied, so in theory things should even work if you use a version of uname which reports a "real" version number like 4, 4.5, 4.51 or 4.52 instead of the hard-coded "2". Regards, Stefan -- Micro$oft is not an answer. It is a question. The answer is 'no'. **= Email 6 ==========================** Date: Fri, 6 Dec 2002 00:44:01 +0100 (CET) From: Stefan Neis Subject: Re: configure - problems, Quoting On Tue, 3 Dec 2002, Ken Ames wrote: > This is merely to say that the full text quote helps developers keep > some important notes in mind from one post to the next Well, especially for the _appended_ full text quote I rather doubt that. Either you remember what the answer is about, in which case you don't read the quote at all anyway, or you don't, so you read the message, ask yourself "what the hell is he talking about?", start reading through the appended quotes, and once you remember, you start reading at the top again ... A short, meaningful quote _at_the_top_ is so much more useful, IMHO. But I must admit, I have meanwhile learned to find my way through almost all kinds of weird quoting, so to me, it's not very important anymore (especially since I stopped being a modem user almost two weeks ago ;-) ) Regards, Stefan -- Micro$oft is not an answer. It is a question. The answer is 'no'. 
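[Editor's note on email 5: a hedged sketch of what Stefan's config.guess point amounts to. The real config.guess matches several `uname` fields (machine:system:release:version); the values below are simulated literals since we are not on OS/2, and `i386-pc-os2-emx` is the triplet usually associated with EMX builds — treat it as an assumption, not a quote from the patch.]

```shell
# Simulate the OS/2 branch of a patched config.guess: match on the
# system name "OS/2" and tolerate any real release number (4, 4.5,
# 4.51, 4.52) instead of only the old hard-coded "2".
UNAME_MACHINE=i386
UNAME_SYSTEM=OS/2
UNAME_RELEASE=4.52   # a "real" version number, not the hard-coded "2"
case "$UNAME_MACHINE:$UNAME_SYSTEM:$UNAME_RELEASE" in
  *:OS/2:*) triplet="$UNAME_MACHINE-pc-os2-emx" ;;   # assumed EMX triplet
esac
echo "$triplet"
```

The point of the patch is only that the pattern no longer depends on the release field, so a fixed uname is not required.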
**= Email 7 ==========================** Date: Fri, 6 Dec 2002 00:46:45 +0100 (CET) From: Stefan Neis Subject: RE: wxWindows-2.3.4 On Thu, 5 Dec 2002, Hakan wrote: > I need to take a look at the command-line version (version 3.65?) and > try it out myself. So far I have three different opinions from three > people... I'd suggest just taking David's word for it that I've been vastly wrong, so that might eliminate at least one of the opinions ... Regards, Stefan -- Micro$oft is not an answer. It is a question. The answer is 'no'. **= Email 8 ==========================** Date: Fri, 6 Dec 2002 01:05:58 +0100 (CET) From: Stefan Neis Subject: RE: wxWindows-2.3.4 On Thu, 5 Dec 2002, Hakan wrote: > * Do you know of a site where the different versions of the gcc > compiler are compared in respect to conformance with Standard C++ as > well as in other respects? How does it compare to VAC++? I don't know of such a comparison, and the only other compiler I know is MS' Visual C++ (5 and 6). C++ support in gcc releases seems to be somewhat more complete than what Microsoft offers at the same time or shortly before, i.e. gcc-2.7/2.8 is better than VC++ 5 in that respect, gcc-2.95 beats VC++ 6, and gcc-3.x is supposed to be more complete than MS' .net releases (but so far I have used neither gcc-3.x nor VC .net). > * Does gcc use STLport? I understand that STLport falls short > of conformance -- what is your opinion (if you have used it)? IIRC, gcc is using the original STL library - with slight modifications prior to gcc-3 and supposedly in unaltered form since gcc-3.0. > * Am I correct in assuming that I do not have to program to the > EMX API provided I have no interest in porting my software to Unix? Well, it depends on what you mean by EMX API. 
Many standard C calls like malloc, free, printf and so on will resolve to calls to something in the EMX runtime library (emxcs.dll or emxcm.dll), unless you're using a couple of special flags and can live with the (rather severe) restrictions imposed by them. But of course you aren't forced to use e.g. fork() or whatever else is EMX specific... Regards, Stefan -- Micro$oft is not an answer. It is a question. The answer is 'no'. **= Email 9 ==========================** Date: Fri, 6 Dec 2002 09:53:35 +0000 From: John Poltorak Subject: Re: Mailman getting close On Thu, Dec 05, 2002 at 06:09:37PM -0500, Ted Sikora wrote: > Ted Sikora wrote: > > > > I ran in bash: > > > > autoconf > > > > then removed from configure: > > > > + prefix) > > if (mode & S_ISGID) <> S_ISGID: > > problems.append("Set-gid bit must be set for directory: " > > > > then: > > ./configure --with-python=e:/apps/python222/python.exe > > --with-username=root --with-groupname=root --with-mail-gid=root > > --with-cgi-gid=root > > --prefix=/mailman > > > > It configured and built without errors but: > > > > Paths are wrong: > > > > The scripts use > > > > #!/usr/bin/env python My env is in /unixos2/usr/bin > > > > so I had to change the script or run python newlist > > > > and running ./newlist gets: > > > > Traceback (most recent call last): > > File "./newlist", line 53, in ? > > from Mailman import mm_cfg > > ImportError: No module named Mailman > > > > I think if we do some path finagling it may work like putting python is > > a Unix structure ie; /usr and I move my unixos2root to / > > > > But then maybe not. > > > > -- > > Can I get a copy of that patch for configure in the previous post? Do you mean the replacement of exit 1 by echo 1 ? That was simply a manual edit... I guess the best way of putting a fix in when you are doing repeated testing is to change CONFIGURE.IN. If your fix above works OK, then maybe remove those lines from configure.in and re-run autoconf. 
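[Editor's note: John's "replacement of exit 1 by echo 1" was a manual edit to the generated configure script so that a check which misfires on OS/2 no longer aborts the run. A hedged sketch of that edit using sed on a stand-in file (the file name and check text here are illustrative; the durable fix is editing configure.in and re-running autoconf, as John says):]

```shell
# Create a stand-in for the failing fragment of a generated configure
# script, then swap the hard abort for a harmless echo, as in the
# manual fix described in the thread.
printf 'checking set-gid bit...\nexit 1\n' > configure.demo
sed 's/^exit 1$/echo 1/' configure.demo > configure.demo.fixed
cat configure.demo.fixed
```

Note the caveat from the thread: autoconf regenerates configure from configure.in, so a hand edit to configure is lost on the next regeneration.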
> -- > Ted -- John **= Email 10 ==========================** Date: Fri, 06 Dec 2002 12:18:51 -0800 From: Ken Ames Subject: Re: wxWindows-2.3.4 hi Dave, and being feature rich and widely used is exactly why I want to use it. thanks for all your help. Ken Dave Webster wrote: > >It has also been widely reviewed in the trade pubs and is noted for being >the most feature rich cross platform toolkit available on the planet, and >those are linked there as well. > >As for VA4 I will no longer comment on VA4. > >-----Original Message----- >From: Hakan [mailto:agents at meddatainc.com] >Sent: Wednesday, December 04, 2002 3:50 PM >To: os2-unix at eyup.org >Subject: RE: wxWindows-2.3.4 > > > > > > **= Email 11 ==========================** Date: Fri, 06 Dec 2002 12:34:03 -0800 From: Ken Ames Subject: got lost hi guys, my apologies to Dave Webster, Stefan Neis, and any others for not getting back to you sooner. I use mozilla mail and it got confused. wxOS2 is built using VAC 3.08! :) I now need to get the gcc OS2 port to build. Stefan, any suggestions? I am getting a really odd compile error, here it is; gcc -c -I./lib/wx/include/os2-2.3 -I../../include -I../../src/regex -I../../sr c/zlib -I../../src/png -I../../src/jpeg -I../../src/tiff -D__WXPM__ -O2 -MMD -O2 -m486 -Zmt -Wall -o inffast.o ../../src/zlib/inffast.c Abnormal program termination core dumped gmake: *** [lib/libwx_os2-2.3.a] Error 3 [X:\wxwindows-2.3.4\gcc-build\pm]gmake gmake: *** No rule to make target `../../src/os2Paccel.cpp', needed by `accel.o' . Stop. that is a stopped (from an error I am guessing) and restarted compile run. any insight would be appreciated, thanks Ken **= Email 12 ==========================** Date: Fri, 06 Dec 2002 13:15:31 +0100 (CET) From: "Christian Hennecke" Subject: Re: Installing autoconf On Thu, 05 Dec 2002 21:04:39 +0100, Andreas Buening wrote: >> I'm trying to install autoconf 2.53b release 2 with little success. 
>> Running configure results in the following error message: >> >> chmod: configure.lineno: Permission denied >> configure: error: cannot create configure.lineno; rerun with a POSIX >> shell >> >> This is independent of the shells I tried (pdksh 5.2.14 release 2, >> ash). Any ideas? > >What exactly did you do, what exactly is your configure output >and which sed and which chmod do you use? Meanwhile I installed all the latest stuff that is available at OS/2ports.com. The chmod error has gone now, but instead there are others. I did the following: - unzip the autoconf package to my e: drive where all the Unix-stuff resides - make sure install.exe is the correct one - start ksh in the autoconf directory - enter 'export ac_executable_extensions=".exe"' - enter './configure --prefix=e:/usr' The output is: [5]/autoconf-2.53b: ./configure --prefix=e:/usr ./configure[237]: sed: No such file or directory ./configure[687]: sed: No such file or directory ./configure[903]: sed: No such file or directory configure: creating cache /dev/null ./configure[1132]: sed: No such file or directory checking for a BSD-compatible install... e:/bin/install.exe -c checking whether build environment is sane... configure: error: ls -t appears to fail. Make sure there is not a broken alias in your environment configure: error: newly created file is older than distributed files! Check your system clock ./configure: sed: No such file or directory ./configure: sed: No such file or directory Christian Hennecke **= Email 13 ==========================** Date: Fri, 06 Dec 2002 13:26:19 -0800 From: Ken Ames Subject: Re: wxWindows-2.3.4 hi Dave, I am not too good on writing c++ so I doubt I could help you much at present. maybe in the future after the learning curve is past. Ken Dave Webster wrote: >I just need help finishing it! 
> >-----Original Message----- >From: Ken Ames [mailto:kenames at pacbell.net] >Sent: Friday, December 06, 2002 2:19 PM >To: os2-unix at eyup.org >Subject: Re: wxWindows-2.3.4 > > >hi Dave, > and being feature rich and widely used is exactly why I want to use >it. thanks for all your help. > >Ken > > > > **= Email 14 ==========================** Date: Fri, 6 Dec 2002 14:44:31 -0600 From: Dave Webster Subject: RE: wxWindows-2.3.4 I just need help finishing it! -----Original Message----- From: Ken Ames [mailto:kenames at pacbell.net] Sent: Friday, December 06, 2002 2:19 PM To: os2-unix at eyup.org Subject: Re: wxWindows-2.3.4 hi Dave, and being feature rich and widely used is exactly why I want to use it. thanks for all your help. Ken Dave Webster wrote: > >It has also been widely reviewed in the trade pubs and is noted for being >the most feature rich cross platform toolkit available on the planet, and >those are linked there as well. > >As for VA4 I will no longer comment on VA4. > >-----Original Message----- >From: Hakan [mailto:agents at meddatainc.com] >Sent: Wednesday, December 04, 2002 3:50 PM >To: os2-unix at eyup.org >Subject: RE: wxWindows-2.3.4 > > > > > > **= Email 15 ==========================** Date: Fri, 6 Dec 2002 16:29:21 +0000 From: John Poltorak Subject: Repository for results of Perl tests I would like to get to the bottom of all the remaining test fails from building Perl 5.8.0 some months ago. Can anyone offer a repository for storing results? They could be useful for anyone who wants to give it a try and doesn't know what to expect. 
-- John **= Email 16 ==========================** Date: Fri, 06 Dec 2002 19:07:29 -0500 From: Ted Sikora Subject: Re: Mailman getting close John Poltorak wrote: > > On Thu, Dec 05, 2002 at 06:09:37PM -0500, Ted Sikora wrote: > > Ted Sikora wrote: > > > I built it again. The scripts run with e.g. python newlist, but they're designed to use #!/usr/bin/env python The plain script fails with: ./newlist: import: command not found ./newlist: import: command not found ./newlist: from: command not found ./newlist: from: command not found ./newlist: PROGRAM: command not found ./newlist: SENDMAIL_ALIAS_TEMPLATE: command not found ./newlist: QMAIL_ALIAS_TEMPLATE: command not found ./newlist: STDOUTMSG: command not found ./newlist: ALIASTEMPLATE: command not found ./newlist: line 87: syntax error near unexpected token `string.lower(m' ./newlist: line 87: `style = string.lower(mm_cfg.MTA_ALIASES_STYLE)' Using env python newlist, or removing /usr/bin from /usr/bin/env in the script, gives: [netcast|d:/unixos2/home/mailman/bin]env python newlist Enter the name of the list: test Enter the email of the person running the list: ted at powerusersbbs.net Initial test password: Traceback (most recent call last): File "newlist", line 220, in ? main() File "newlist", line 169, in main mlist.Create(listname, owner_mail, pw) File "/unixos2/home/mailman/Mailman/MailList.py", line 786, in Create self.__lock.lock() File "/unixos2/home/mailman/Mailman/LockFile.py", line 219, in lock self.__write() File "/unixos2/home/mailman/Mailman/LockFile.py", line 350, in __write fp = open(self.__tmpfname, 'w') IOError: [Errno 2] No such file or directory: '/unixos2/home/mailman/locks/.lock.netcast.3607' Using just python newlist gives: [netcast|d:/unixos2/home/mailman/bin]python newlist Enter the name of the list: test Enter the email of the person running the list: ted at powerusersbbs.net Initial test password: Traceback (most recent call last): File "newlist", line 220, in ? 
main() File "newlist", line 169, in main mlist.Create(listname, owner_mail, pw) File "/unixos2/home/mailman/Mailman/MailList.py", line 786, in Create self.__lock.lock() File "/unixos2/home/mailman/Mailman/LockFile.py", line 219, in lock self.__write() File "/unixos2/home/mailman/Mailman/LockFile.py", line 350, in __write fp = open(self.__tmpfname, 'w') IOError: [Errno 2] No such file or directory: '/unixos2/home/mailman/locks/.lock.netcast.3580' Seems it cannot create a lockfile or find one. python mmsitepass worked fine. -- Ted Sikora tsikora at ntplx.net **= Email 17 ==========================** Date: Fri, 06 Dec 2002 19:34:19 -0500 From: Ted Sikora Subject: Re: Mailman getting close This is a multi-part message in MIME format. --------------501DEC5C77AD9FC8DA91BFBC Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Ted Sikora wrote: > > John Poltorak wrote: > > > > On Thu, Dec 05, 2002 at 06:09:37PM -0500, Ted Sikora wrote: > > > Ted Sikora wrote: > > > > > > > Using just python newlist gives: > > [netcast|d:/unixos2/home/mailman/bin]python newlist > Enter the name of the list: test > Enter the email of the person running the list: ted at powerusersbbs.net > Initial test password: > Traceback (most recent call last): > File "newlist", line 220, in ? > main() > File "newlist", line 169, in main > mlist.Create(listname, owner_mail, pw) > File "/unixos2/home/mailman/Mailman/MailList.py", line 786, in Create > self.__lock.lock() > File "/unixos2/home/mailman/Mailman/LockFile.py", line 219, in lock > self.__write() > File "/unixos2/home/mailman/Mailman/LockFile.py", line 350, in __write > fp = open(self.__tmpfname, 'w') > IOError: [Errno 2] No such file or directory: > '/unixos2/home/mailman/locks/ >.lock.netcast.3580' > > Seems it cannot create a lockfile or find one. > > python mmsitepass worked fine. > If we can make LockFile.py work we're home free. Any ideas as to where to look? 
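[Editor's note: one place to look is the step that fails in the traceback. LockFile.__write() opens its per-process temp file inside the locks directory, so if /unixos2/home/mailman/locks does not exist, the open() raises exactly the IOError shown above. A hedged Python sketch of the same write-and-link steps, using a throwaway directory and illustrative names rather than Mailman's paths:]

```python
# Reproduce LockFile's write/link steps in isolation: the locks
# directory must exist before __write() can create the temp file,
# and os.link must work on the filesystem for lock() to succeed.
import os
import socket
import tempfile

lockdir = tempfile.mkdtemp()               # stands in for .../mailman/locks
lockfile = os.path.join(lockdir, "test.lock")
tmpname = "%s.%s.%d" % (lockfile, socket.gethostname(), os.getpid())

fp = open(tmpname, "w")                    # this is the call that raised
fp.write(tmpname)                          # IOError when the directory
fp.close()                                 # was missing

os.link(tmpname, lockfile)                 # lay claim to the lock
print(os.stat(lockfile).st_nlink)          # 2 = temp file + lock file
```

If this sketch fails on the real install, the first things to check would be whether the locks directory exists and whether hard links (os.link) are supported on that OS/2 filesystem at all, since the whole algorithm depends on link counts.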
-- Ted Sikora tsikora at ntplx.net --------------501DEC5C77AD9FC8DA91BFBC Content-Type: text/plain; charset=us-ascii; name="LockFile.py" Content-Transfer-Encoding: 7bit Content-Disposition: inline; filename="LockFile.py" # Copyright (C) 1998,1999,2000 by the Free Software Foundation, Inc. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. """Portable, NFS-safe file locking with timeouts. This code implements an NFS-safe file-based locking algorithm influenced by the GNU/Linux open(2) manpage, under the description of the O_EXCL option. From RH6.1: [...] O_EXCL is broken on NFS file systems, programs which rely on it for performing locking tasks will contain a race condition. The solution for performing atomic file locking using a lockfile is to create a unique file on the same fs (e.g., incorporating hostname and pid), use link(2) to make a link to the lockfile. If link() returns 0, the lock is successful. Otherwise, use stat(2) on the unique file to check if its link count has increased to 2, in which case the lock is also successful. The assumption made here is that there will be no `outside interference', e.g. no agent external to this code will have access to link() to the affected lock files. LockFile objects support lock-breaking so that you can't wedge a process forever. 
This is especially helpful in a web environment, but may not be appropriate for all applications. Locks have a `lifetime', which is the maximum length of time the process expects to retain the lock. It is important to pick a good number here because other processes will not break an existing lock until the expected lifetime has expired. Too long and other processes will hang; too short and you'll end up trampling on existing process locks -- and possibly corrupting data. In a distributed (NFS) environment, you also need to make sure that your clocks are properly synchronized. Locks can also log their state to a log file. When running under Mailman, the log file is placed in a Mailman-specific location, otherwise, the log file is called `LockFile.log' and placed in the temp directory (calculated from tempfile.mktemp()). """ # This code has undergone several revisions, with contributions from Barry # Warsaw, Thomas Wouters, Harald Meland, and John Viega. It should also work # well outside of Mailman so it could be used for other Python projects # requiring file locking. See the __main__ section at the bottom of the file # for unit testing. import os import socket import time import errno import random from stat import ST_NLINK, ST_MTIME # Units are floating-point seconds. DEFAULT_LOCK_LIFETIME = 15 # Allowable a bit of clock skew CLOCK_SLOP = 10 # Figure out what logfile to use. This is different depending on whether # we're running in a Mailman context or not. 
_logfile = None def _get_logfile(): global _logfile if _logfile is None: try: from Mailman.Logging.StampedLogger import StampedLogger _logfile = StampedLogger('locks') except ImportError: # not running inside Mailman import tempfile dir = os.path.split(tempfile.mktemp())[0] path = os.path.join(dir, 'LockFile.log') # open in line-buffered mode class SimpleUserFile: def __init__(self, path): self.__fp = open(path, 'a', 1) self.__prefix = '(%d) ' % os.getpid() def write(self, msg): now = '%.3f' % time.time() self.__fp.write(self.__prefix + now + ' ' + msg) _logfile = SimpleUserFile(path) return _logfile # Exceptions that can be raised by this module class LockError(Exception): """Base class for all exceptions in this module.""" class AlreadyLockedError(LockError): """An attempt is made to lock an already locked object.""" class NotLockedError(LockError): """An attempt is made to unlock an object that isn't locked.""" class TimeOutError(LockError): """The timeout interval elapsed before the lock succeeded.""" class LockFile: """A portable way to lock resources by way of the file system. This class supports the following methods: __init__(lockfile[, lifetime[, withlogging]]): Create the resource lock using lockfile as the global lock file. Each process laying claim to this resource lock will create their own temporary lock files based on the path specified by lockfile. Optional lifetime is the number of seconds the process expects to hold the lock. Optional withlogging, when true, turns on lockfile logging (see the module docstring for details). set_lifetime(lifetime): Set a new lock lifetime. This takes affect the next time the file is locked, but does not refresh a locked file. get_lifetime(): Return the lock's lifetime. refresh([newlifetime[, unconditionally]]): Refreshes the lifetime of a locked file. Use this if you realize that you need to keep a resource locked longer than you thought. With optional newlifetime, set the lock's lifetime. 
Raises NotLockedError if the lock is not set, unless optional unconditionally flag is set to true. lock([timeout]): Acquire the lock. This blocks until the lock is acquired unless optional timeout is greater than 0, in which case, a TimeOutError is raised when timeout number of seconds (or possibly more) expires without lock acquisition. Raises AlreadyLockedError if the lock is already set. unlock([unconditionally]): Relinquishes the lock. Raises a NotLockedError if the lock is not set, unless optional unconditionally is true. locked(): Return 1 if the lock is set, otherwise 0. To avoid race conditions, this refreshes the lock (on set locks). """ def __init__(self, lockfile, lifetime=DEFAULT_LOCK_LIFETIME, withlogging=0): """Create the resource lock using lockfile as the global lock file. Each process laying claim to this resource lock will create their own temporary lock files based on the path specified by lockfile. Optional lifetime is the number of seconds the process expects to hold the lock. Optional withlogging, when true, turns on lockfile logging (see the module docstring for details). """ self.__lockfile = lockfile self.__lifetime = lifetime self.__tmpfname = '%s.%s.%d' % ( lockfile, socket.gethostname(), os.getpid()) self.__withlogging = withlogging self.__logprefix = os.path.split(self.__lockfile)[1] def set_lifetime(self, lifetime): """Set a new lock lifetime. This takes affect the next time the file is locked, but does not refresh a locked file. """ self.__lifetime = lifetime def get_lifetime(self): """Return the lock's lifetime.""" return self.__lifetime def refresh(self, newlifetime=None, unconditionally=0): """Refreshes the lifetime of a locked file. Use this if you realize that you need to keep a resource locked longer than you thought. With optional newlifetime, set the lock's lifetime. Raises NotLockedError if the lock is not set, unless optional unconditionally flag is set to true. 
""" if newlifetime is not None: self.set_lifetime(newlifetime) # Do we have the lock? As a side effect, this refreshes the lock! if not self.locked() and not unconditionally: raise NotLockedError def lock(self, timeout=0): """Acquire the lock. This blocks until the lock is acquired unless optional timeout is greater than 0, in which case, a TimeOutError is raised when timeout number of seconds (or possibly more) expires without lock acquisition. Raises AlreadyLockedError if the lock is already set. """ if timeout: timeout_time = time.time() + timeout # Make sure my temp lockfile exists, and that its contents are # up-to-date (e.g. the temp file name, and the lock lifetime). self.__write() # TBD: This next call can fail with an EPERM. I have no idea why, but # I'm nervous about wrapping this in a try/except. It seems to be a # very rare occurence, only happens from cron, and (only?) on Solaris # 2.6. self.__touch() self.__writelog('laying claim') # for quieting the logging output loopcount = -1 while 1: loopcount = loopcount + 1 # Create the hard link and test for exactly 2 links to the file try: os.link(self.__tmpfname, self.__lockfile) # If we got here, we know we know we got the lock, and never # had it before, so we're done. Just touch it again for the # fun of it. self.__writelog('got the lock') self.__touch() break except OSError, e: # The link failed for some reason, possibly because someone # else already has the lock (i.e. we got an EEXIST), or for # some other bizarre reason. if e.errno == errno.ENOENT: # TBD: in some Linux environments, it is possible to get # an ENOENT, which is truly strange, because this means # that self.__tmpfname doesn't exist at the time of the # os.link(), but self.__write() is supposed to guarantee # that this happens! I don't honestly know why this # happens, but for now we just say we didn't acquire the # lock, and try again next time. pass elif e.errno <> errno.EEXIST: # Something very bizarre happened. 
                # Clean up our state and
                # pass the error on up.
                self.__writelog('unexpected link error: %s' % e)
                os.unlink(self.__tmpfname)
                raise
            elif self.__linkcount() <> 2:
                # Somebody's messin' with us!  Log this, and try again
                # later.  TBD: should we raise an exception?
                self.__writelog('unexpected linkcount: %d' %
                                self.__linkcount())
            elif self.__read() == self.__tmpfname:
                # It was us that already had the link.
                self.__writelog('already locked')
                raise AlreadyLockedError
            # otherwise, someone else has the lock
            pass
            # We did not acquire the lock, because someone else already has
            # it.  Have we timed out in our quest for the lock?
            if timeout and timeout_time < time.time():
                os.unlink(self.__tmpfname)
                self.__writelog('timed out')
                raise TimeOutError
            # Okay, we haven't timed out, but we didn't get the lock.  Let's
            # find if the lock lifetime has expired.
            if time.time() > self.__releasetime() + CLOCK_SLOP:
                # Yes, so break the lock.
                self.__break()
                self.__writelog('lifetime has expired, breaking')
            # Okay, someone else has the lock, our claim hasn't timed out yet,
            # and the expected lock lifetime hasn't expired yet.  So let's
            # wait a while for the owner of the lock to give it up.
            elif not loopcount % 100:
                self.__writelog('waiting for claim')
            self.__sleep()

    def unlock(self, unconditionally=0):
        """Unlock the lock.

        If we don't already own the lock (either because of unbalanced unlock
        calls, or because the lock was stolen out from under us), raise a
        NotLockedError, unless optional `unconditionally' is true.
        """
        islocked = self.locked()
        if not islocked and not unconditionally:
            raise NotLockedError
        # If we owned the lock, remove the global file, relinquishing it.
        if islocked:
            try:
                os.unlink(self.__lockfile)
            except OSError, e:
                if e.errno <> errno.ENOENT:
                    raise
        # Remove our tempfile
        try:
            os.unlink(self.__tmpfname)
        except OSError, e:
            if e.errno <> errno.ENOENT:
                raise
        self.__writelog('unlocked')

    def locked(self):
        """Returns 1 if we own the lock, 0 if we do not.

        Checking the status of the lockfile resets the lock's lifetime, which
        helps avoid race conditions during the lock status test.
        """
        # Discourage breaking the lock for a while.
        try:
            self.__touch()
        except OSError, e:
            if e.errno == errno.EPERM:
                # We can't touch the file because we're not the owner.  I
                # don't see how we can own the lock if we're not the owner.
                return 0
            else:
                raise
        # TBD: can the link count ever be > 2?
        if self.__linkcount() <> 2:
            return 0
        return self.__read() == self.__tmpfname

    def finalize(self):
        self.unlock(unconditionally=1)

    def __del__(self):
        self.finalize()

    #
    # Private interface
    #

    def __writelog(self, msg):
        if self.__withlogging:
            _get_logfile().write('%s %s\n' % (self.__logprefix, msg))

    def __write(self):
        # Make sure it's group writable
        oldmask = os.umask(002)
        try:
            fp = open(self.__tmpfname, 'w')
            fp.write(self.__tmpfname)
            fp.close()
        finally:
            os.umask(oldmask)

    def __read(self):
        try:
            fp = open(self.__lockfile)
            filename = fp.read()
            fp.close()
            return filename
        except (OSError, IOError), e:
            if e.errno <> errno.ENOENT:
                raise
            return None

    def __touch(self, filename=None):
        t = time.time() + self.__lifetime
        try:
            # TBD: We probably don't need to modify atime, but this is easier.
            os.utime(filename or self.__tmpfname, (t, t))
        except OSError, e:
            if e.errno <> errno.ENOENT:
                raise

    def __releasetime(self):
        try:
            return os.stat(self.__lockfile)[ST_MTIME]
        except OSError, e:
            if e.errno <> errno.ENOENT:
                raise
            return -1

    def __linkcount(self):
        try:
            return os.stat(self.__lockfile)[ST_NLINK]
        except OSError, e:
            if e.errno <> errno.ENOENT:
                raise
            return -1

    def __break(self):
        # First, touch the global lock file.  This reduces but does not
        # eliminate the chance for a race condition during breaking.  Two
        # processes could both pass the test for lock expiry in lock() before
        # one of them gets to touch the global lockfile.  This shouldn't be
        # too bad because all they'll do in this function is wax the lock
        # files, not claim the lock, and we can be defensive for ENOENTs
        # here.
        #
        # Touching the lock could fail if the process breaking the lock and
        # the process that claimed the lock have different owners.  We could
        # solve this by set-uid'ing the CGI and mail wrappers, but I don't
        # think it's that big a problem.
        try:
            self.__touch(self.__lockfile)
        except OSError, e:
            if e.errno <> errno.EPERM:
                raise
        # Get the name of the old winner's temp file.
        winner = self.__read()
        # Remove the global lockfile, which actually breaks the lock.
        try:
            os.unlink(self.__lockfile)
        except OSError, e:
            if e.errno <> errno.ENOENT:
                raise
        # Try to remove the old winner's temp file, since we're assuming the
        # winner process has hung or died.  Don't worry too much if we can't
        # unlink their temp file -- this doesn't wreck the locking algorithm,
        # but will leave temp file turds laying around, a minor inconvenience.
        try:
            if winner:
                os.unlink(winner)
        except OSError, e:
            if e.errno <> errno.ENOENT:
                raise

    def __sleep(self):
        interval = random.random() * 2.0 + 0.01
        time.sleep(interval)


# Unit test framework
def _dochild():
    prefix = '[%d]' % os.getpid()
    # Create somewhere between 1 and 1000 locks
    lockfile = LockFile('/tmp/LockTest', withlogging=1, lifetime=120)
    # Use a lock lifetime of between 1 and 15 seconds.  Under normal
    # situations, Mailman's usage patterns (untested) shouldn't be much
    # longer than this.
    workinterval = 5 * random.random()
    hitwait = 20 * random.random()
    print prefix, 'workinterval:', workinterval
    islocked = 0
    t0 = 0
    t1 = 0
    t2 = 0
    try:
        try:
            t0 = time.time()
            print prefix, 'acquiring...'
            lockfile.lock()
            print prefix, 'acquired...'
            islocked = 1
        except TimeOutError:
            print prefix, 'timed out'
        else:
            t1 = time.time()
            print prefix, 'acquisition time:', t1-t0, 'seconds'
            time.sleep(workinterval)
    finally:
        if islocked:
            try:
                lockfile.unlock()
                t2 = time.time()
                print prefix, 'lock hold time:', t2-t1, 'seconds'
            except NotLockedError:
                print prefix, 'lock was broken'
    # wait for next web hit
    print prefix, 'webhit sleep:', hitwait
    time.sleep(hitwait)


def _seed():
    try:
        fp = open('/dev/random')
        d = fp.read(40)
        fp.close()
    except (IOError, OSError), e:
        if e.errno <> errno.ENOENT:
            raise
        import sha
        d = sha.new(`os.getpid()` + `time.time()`).hexdigest()
    random.seed(d)


def _onetest():
    loopcount = random.randint(1, 100)
    for i in range(loopcount):
        print 'Loop %d of %d' % (i+1, loopcount)
        pid = os.fork()
        if pid:
            # parent, wait for child to exit
            pid, status = os.waitpid(pid, 0)
        else:
            # child
            _seed()
            try:
                _dochild()
            except KeyboardInterrupt:
                pass
            os._exit(0)


def _reap(kids):
    if not kids:
        return
    pid, status = os.waitpid(-1, os.WNOHANG)
    if pid <> 0:
        del kids[pid]


def _test(numtests):
    kids = {}
    for i in range(numtests):
        pid = os.fork()
        if pid:
            # parent
            kids[pid] = pid
        else:
            # child
            _seed()
            try:
                _onetest()
            except KeyboardInterrupt:
                pass
            os._exit(0)
        # slightly randomize each kid's seed
    while kids:
        _reap(kids)


if __name__ == '__main__':
    import sys
    import random
    _test(int(sys.argv[1]))


**= Email 18 ==========================**

Date: Fri, 6 Dec 2002 20:30:28 +0000
From: John Poltorak
Subject: Re: PKGINFO (was: installpkg)

Here's a copy of a msg from Holger which went missing:-

To pop up again after long silence, since I originally invented that
format...

On Tue, Nov 19, 2002 at 11:05:25AM +0100, Michael Zolk wrote:
> On Sun, Nov 17, 2002 at 01:25:56AM +0100, Andreas Buening wrote:
> > Some thoughts about your PKGINFO keywords. I encountered a lot of
> > questions when I read your manpages.
> > These reflect just my ideas and aren't well ordered:
> >
> > PKGNAME pkgname
> > Where exactly is the PKGINFO file copied?
> > /var/lib/unixos2/packages/.???
>
> It's copied to /var/lib/unixos2/packages/pkgname.

The idea is to have the pkgfile as a simple and readable reference for any
software that deals with packages, as well as for the user. So all packages
ever installed are supposed to belong in a single directory. The above one
looks useful.

> > VERSION version-string
> > Make it mandatory. It would cause the maintainers to use a version
> > number. Installing an old version over a new one could cause an error.

ACK. But... to add on this one: for simplicity of the whole system, the
package is identified exclusively by its name. That is, in the UnixOS/2
system there is one and only one package named GREP, even if the high-end
Unixers will howl and complain that they need GNU grep as well as BSD grep
as well as SYSV grep, and all of them in at least 7 different versions.
Let them manage their chaos if they like to, but we shouldn't mandate a
chaotic system by differentiating between multiple versions. This results
in a resolution for the numbering problem:

> The problem here is that there are so many different formats the version
> string can have that I don't know how to compare version numbers. Debian
> can do it somehow, but I don't know how. I have seen version strings
> like "962", "0.1", "1.2PL2", "2.4.19", "20011018" etc.

You simply do a string compare between the different version strings: 962
is NOT EQUAL to 1.2PL2, and 1.2PL2 is NOT EQUAL to 1.2PL3. It is
impossible to build a time history of versions which would give us an
ordering telling us that 962 is older than 1.2PL2, which is older than
1.2PL0 (for some obscure reason). The base system, in the style of a
distribution, is supposed to have a consistent set of pkgs. *Usually* it
will not cause too many problems to install a different GREP - newer or
older - on top of such a base system.
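Holger's rule - version strings are opaque identifiers, compared only for
identity and never for order - is a one-liner in code. A minimal Python
sketch (the function name is invented for illustration; this is not code
from any actual UnixOS/2 tool):

```python
def same_version(installed, candidate):
    # Versions are opaque strings: equal or not equal, never ordered.
    # "962", "1.2PL2" and "20011018" are all acceptable formats, but no
    # "older"/"newer" relation is defined between any two of them.
    return installed.strip() == candidate.strip()

# Identity works uniformly across wildly different version formats:
assert same_version('1.2PL2', '1.2PL2')
assert not same_version('1.2PL2', '1.2PL3')
assert not same_version('962', '20011018')
```

The design choice is exactly what the text argues: by refusing to define
an ordering, the whole Debian-style version-comparison problem disappears.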
What you have to add as a real REQUIREMENT is the libc and other essential
libraries the package relies on. If your PREINST script is an sh script,
then there is a requirement to have a pkg SH installed (note: not BASH or
ZSH or KSH, but explicitly SH is meant here - don't make the install too
complicated by inventing aliases either), but not SH-1.2.3.4PL27 (would
1.2.3.4PL29 work as well, would 1.130-27/11/2002 also do?). That is: if
the package maintainer explicitly demands LIBC-1.2.3 then exactly that
must be installed; if it is just LIBC - he shouldn't add more versioning
unless it is absolutely mandatory - then any pkg LIBC will do.

> > What about a "TYPE {CORE|USR|LOCAL|OPT}" keyword that specifies
> > which kind of package you have? USR, LOCAL or OPT packages could
> > be less restrictive. E.g. non-CORE packages may use sh scripts
> > for PREINST scripts. Or removepkg could give an error if anybody
> > tries to remove a CORE package.
>
> Yes, I think we need a way to specify the priority of a package, i.e.
> if it's essential for UnixOS2, a pkg that most users would want to have or

My 2c: don't add artificial recommendations about priority. Make something
like a set of "a"-series packages which you *MUST* have on the UnixOS/2
system - these are the runtime libraries, a shell and the common Unix
utils (ls, rm, mv, etc.) - and make anything else optional. Whether users
want to have VI or EMACS is a matter of taste. It at least doesn't have
anything to do with USR, LOCAL, or OPT locations. Thus we have two levels:
CORE and NON-CORE. Something like OS/2 itself: the kernel and \OS2\BOOT
and \OS2\DLL and a few more directories are CORE; whether you have CHESS
or MAHJONGG on your system is your decision (even if the OS/2 installer
offers this as default), and thus NON-CORE.

> rather a "special interest" pkg. installpkg does not handle these
> priorities since it only unpacks a single pkg specified on the command
> line.
> Maybe we can have a more user-friendly installation program that handles
> the overall state of the distribution, i.e. fetches a list of available
> packages with their priorities from a local file or the web site, makes
> sure that all the

Priorities are a matter of taste. I don't need an MP3 player installed -
others may not be able to live without one. Even a category "software
development" does not mean that the installer should load LISP, APL and
FORTH interpreters onto the disk automatically. I continuously spend much
time "correcting" the categories of so-called "intelligent" choices of
Linux installers, which fill the disk with useless bloat just because the
maintainer of the category is a lover of certain software which no one
else seriously uses but which he likes to promote this way. There is some
tendency to append "useful" add-ons to categories, such as adding BISON
and FLEX to GCC, because "serious" programmers supposedly will also
include scanners and parsers in their C code. In other words: beware of
"user friendliness" by categories. If the user doesn't know BISON at all,
and it is not necessary for installing other packages (it hardly is), then
refrain from the temptation to persuade the user that he should install it.

> essential pkgs are installed and checks if the dependencies for the pkgs
> selected for installation are met.
>
> > DESC description
> > Is this info printed? A message like "Do not install this package
> > if ..." might be helpful. If this is the case then a comment that
> > is not printed (REM?) might be helpful, too.
>
> ?

It is intended as a textual description of what the user will get if he
selects to install it. This is not the place for "Do not install this
package if... (you already have foobar installed)". The latter case is
something to be checked by a preinst script. Although I advocated for the
readability of the PKGINFO format, I believe that a REM "Don't install
this package" in the text will be largely ignored, and lead to endless
discussions.
If there are things that will likely prevent a package from running on an
arbitrary system - missing other packages, missing script interpreters,
conflicting packages, or lack of disk space - they should be checked by
explicit rules, or by the preinst script if standard rules are too
complicated.

Holger
--
Please update your tables to my new e-mail address: holger.veit$ais.fhg.de
(replace the '$' with ' at ' -- spam-protection)

**= Email 19 ==========================**

Date: Fri, 06 Dec 2002 20:37:34 +0100 (CET)
From: "Franz Bakan"
Subject: Re: Installing autoconf

On Fri, 06 Dec 2002 13:15:31 +0100 (CET), Christian Hennecke wrote:

>On Thu, 05 Dec 2002 21:04:39 +0100, Andreas Buening wrote:
>
>>> I'm trying to install autoconf 2.53b release 2 with little success.
>>> Running configure results in the following error message:
>>>
>>> chmod: configure.lineno: Permission denied
>>> configure: error: cannot create configure.lineno; rerun with a POSIX
>>> shell
>>>
>>> This is independent of the shells I tried (pdksh 5.2.14 release 2,
>>> ash). Any ideas?
>>
>>What exactly did you do, what exactly is your configure output
>>and which sed and which chmod do you use?
>
>Meanwhile I installed all the latest stuff that is available at
>OS/2ports.com. The chmod error has gone now, but instead there are
>others.
>
>I did the following:
>
>- unzip the autoconf package to my e: drive where all the Unix-stuff
>resides
>- make sure install.exe is the correct one
>- start ksh in the autoconf directory
>- enter 'export ac_executable_extensions=".exe"'
>- enter './configure --prefix=e:/usr'
>
>The output is:
>
>[5]/autoconf-2.53b: ./configure --prefix=e:/usr
>./configure[237]: sed: No such file or directory
>./configure[687]: sed: No such file or directory
>./configure[903]: sed: No such file or directory
>configure: creating cache /dev/null
>./configure[1132]: sed: No such file or directory
>checking for a BSD-compatible install...
>e:/bin/install.exe -c
>checking whether build environment is sane... configure: error: ls -t
>appears to fail.  Make sure there is not a broken
>alias in your environment
>configure: error: newly created file is older than distributed files!
>Check your system clock
>./configure: sed: No such file or directory
>./configure: sed: No such file or directory
>
>Christian Hennecke

**= Email 20 ==========================**

Date: Fri, 6 Dec 2002 20:39:37 +0000
From: John Poltorak
Subject: Re: installpkg

More from Holger:-

On Wed, Nov 20, 2002 at 02:08:50PM +0100, Michael Zolk wrote:
> On Sun, Nov 17, 2002 at 01:25:56AM +0100, Andreas Buening wrote:
>
> > The idea of /bin, /lib and /sbin is that _all_ stuff that is
> > necessary for the installation has to be there. If you really need
> > another rexx script for the installation we must add it to /bin
> > or /sbin.
>
> Yes. According to the FHS, such "internal binaries" should be placed in
> /usr/lib/ (or /lib/ in this case), but this would require another
> addition to PATH. So the right place would indeed be /bin or /sbin.

Concerning scripts used by the installation, you have to distinguish
between the PREINST and similar scripts and the package management
utilities themselves. The package management belongs in /bin or /sbin;
the additional scripts are basically /var/tmp - scratch; they might be
kept in some /var/adm/pkg sub-structure.

> [ install scripts that require additional packages ]
>
> > > This is indeed a problem. I don't know how all the Linux distros
> > > handle this.
> >
> > Hmm, good question. Does anybody know?
>
> I know that the Debian policy requires that all programs that are marked
> 'Essential' must work properly even without configuration.

This is THE reason for REXX scripts in the main package management area,
and for its preferred usage in PREINST.

> > > This is the reason why all the scripts in ux2_base are Rexx scripts -
> > > they only need the stuff that's already present on an OS/2 system.
> > > I'm not sure if we should simply define that install scripts
> > > _have to be_ Rexx scripts.
> >
> > No, not in general. My idea for the installation process is as follows:
> >
> > 0) - Change OS/2 variables (PATH, LIBPATH, ...)
> >    - Add Unix/Posix variables (HOME, LOGNAME, ...)
> >    - Add UnixOS/2 variables (UNIXROOT, TMPDIR)
> >    - Reboot if necessary
>
> All this is done by unzipping the ux2_base package and running
> doinst.cmd.

Exactly.

> > 1) installpkg puts the UnixOS/2 core packages (/bin and /lib)
> >    in place. All installation scripts and file lists are stored
> >    somewhere in /var without executing them. After this has been
> >    done we have all necessary installation tools available.
> >    (That's the definition of /bin and /lib: necessary for
> >    installation and maintenance.)

PREINST scripts are supposed to run while installpkg is installing a
package because they do certain tasks that are not performed by the common
process (copying files). They, for instance, add a new user 'postgres' to
an existing /etc/passwd when you want to install that well-known database.
Besides ux2_base, which needs a reboot (to set the environment variables),
there is no such thing as a "UnixOS/2 core package". If you mean shells or
Unix utils (fileutil, textutil, shellutil, etc.), they are additional
packages which might impose a certain installation sequence because of
dependencies (but they should not have any). Apart from that, they are not
different from a compiler or emacs or tex package. This means "ux2_base"
is special, and all other packages must be basically self-contained.
Otherwise the concept of packages is largely defeated - you could lump
everything that resembles a Unix command into a big ZIP file, unpack it
into a /bin,/usr tree, and be done.

> > 2) Run "ux2-update" the first time which a) executes those scripts
> >    and b) updates some databases/whatever.
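The two-phase scheme in the quoted proposal (phase 1: unpack files and
stash install scripts without running them; phase 2: a single ux2-update
pass executes everything) can be sketched in miniature. This is a toy
Python illustration with invented names and data shapes, not actual
installpkg code:

```python
def installpkg(pkg, pending):
    # Phase 1: unpack the package's files (elided here) and store its
    # install scripts for later, without executing anything.
    for script in pkg.get('scripts', []):
        pending.append((pkg['name'], script))

def ux2_update(pending, run):
    # Phase 2: execute all deferred scripts in one pass, in the order
    # the packages were unpacked.
    while pending:
        name, script = pending.pop(0)
        run('%s:%s' % (name, script))

pending, log = [], []
installpkg({'name': 'sed', 'scripts': ['postinst0']}, pending)
installpkg({'name': 'grep', 'scripts': ['postinst0', 'postinst1']}, pending)
ux2_update(pending, log.append)
# log: ['sed:postinst0', 'grep:postinst0', 'grep:postinst1']
```

The separation is the whole point of the scheme: by the time any script
runs, every file of every package in the batch is already in place.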
ux2-update will have a very complicated task resolving obscure
dependencies when run on a directory with a lot of scripts.

> > 3) Now the base system is working and it's possible to install
> >    any number of other packages by installpkg and run "ux2-update"
> >    again.
>
> Hmm... the more I think about it... :) The only Linux distro that I
> *really* know well enough, Debian, seems to do it in a similar way.
> First all packages are unpacked, and when the files are in place the
> packages are configured.

Debian is well-known for the avalanche problem: you change some middle
package and end up with updates for almost the whole distribution.

> It would only be necessary to delay the configuration for all the stuff
> that's in /bin; for everything else ux2-update could be called directly
> by installpkg.
>
> > Because of that all install scripts can rely on every tool that
> > is in /bin. I.e. sh or even sed scripts are allowed but perl scripts
> > are not (because perl is in /usr/bin).

This conflict is unresolvable, as in the long run it will make every
package essential: what happens if you want to install a package that has
its documentation in texinfo or even in XML format? You end up installing
xml-tools and texinfo and maybe even TeX just for the purpose of getting a
few files done. Perl is of course an "essential" package, but Python and
Tcl and Tk (needlessly bash + zsh + tcsh + ksh + ash and, and, and...) are
as well. Where do you stop? With deferred execution of ux2-update you
might even find that due to some cyclic dependency (unavoidable!) you
cannot install the whole system - the classical catch-22 situation. Thus
the reason for REXX as the installation language of choice - it is there,
it works, and it is self-contained.
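The cyclic-dependency catch-22 warned about here is at least mechanically
detectable before installation begins. A small hypothetical sketch,
assuming the REQUIRES entries of all packages have been collected into a
pkg -> list-of-required-pkgs map (the function name and data shape are
invented for illustration):

```python
def find_cycle(requires):
    # Depth-first search over a pkg -> [required pkgs] map; returns True
    # if some REQUIRES chain loops back on itself (the catch-22 case).
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / in progress / done
    state = dict.fromkeys(requires, WHITE)
    def visit(pkg):
        state[pkg] = GREY
        for dep in requires.get(pkg, []):
            if state.get(dep, WHITE) == GREY:
                return True               # back edge: a cycle exists
            if state.get(dep, WHITE) == WHITE and dep in requires and visit(dep):
                return True
        state[pkg] = BLACK
        return False
    return any(state[p] == WHITE and visit(p) for p in requires)

assert not find_cycle({'perl': ['libc'], 'libc': []})
assert find_cycle({'texinfo': ['perl'], 'perl': ['texinfo']})
```

Detecting the cycle is the easy half; as the text notes, the hard part is
that no deferred-execution order can satisfy it once it exists.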
Besides: the supposed shell scripts which will run during installation
will have to be written from scratch as well, so why waste time discussing
what package is essential or core or mandatory, or whatever one wants to
call it - any set will result in some sort of restriction (in the above
case you would exclude Perl, which makes XFree86-4.x unavailable for
out-of-the-box install; in another case you would exclude mailman - Python
missing).

> This set of "essential" packages should be kept as small as possible.

It will unavoidably grow to the whole set of Unix tools, which means we
could forget about common package slices and will end up with a single
large ZIP file named "The UnixOS/2 distro".

> Furthermore, it's important to choose an install method that makes life
> easier for package maintainers. It would be good to get some more input
> on this issue.

Rather than a list of "may use" software, specify that installation uses
REXX only, and you have cut off the whole discussion.

> > REQUIRES pkgname
> > libunixos2 is just a special lib. I suggest using no REQUIRES
> > keyword for any package that is in /bin or /lib. It's sometimes hard
> > to find out which of the GNU tools is used and which is not.
>
> If we use some kind of priority system for the packages, then I agree
> that it's not necessary to list all of the core packages as required in
> the PKGINFO files of all packages.

It does not make sense to have more than a single "core" package - you
need every file in each such package somewhere somehow. Rather, you would
introduce obscure dependencies if you had, say, core packages containing
"sed" and "grep" and "sh" and only replaced a single one of them - the
result might be that no dependent script will run any longer. No: the core
package is "essential" and contains matching versions of all three
utilities, and whenever someone coins a new version of one of the
"essentials", a matching set of all of them must be issued.
(Note the example is weak, because typically these three utilities are not
too intertwined, but it explains the point.)

> > SCRIPT {PREINST|POSTINST|POSTDEL} scriptname
> > At least the POSTINST scripts can be sh scripts. Where are these

No. Simple case: what happens if you want to postinstall or deinstall
ux2_base? You cannot rely on /bin/sh being around at that time. So you
already have one exception from the rule. How many more might there be? If
/bin/sh is allowed, why not perl, tcl, python, lisp ...? It revolves
around the basic idea: "behave always as if you have a naked OS/2 system",
or in other words: "assume that your current installation of UnixOS/2 is
incomplete or defective" - there must be a way to continue without
removing everything and starting over.

> > scripts stored? How do you avoid name collisions?
>
> The scripts are copied to /var/lib/unixos2/scripts/. The scripts there
> are named <pkgname>.postinst<number>. The <number> at the end is used so
> that it's possible to have more than one script of the same type.

And to add: the number is to be taken as a sequence number, i.e.
.postinst1 is to be run after .postinst0. No particular sequence can be
easily determined concerning and , though. Don't even attempt to consider
some explicit or implicit ordering - catch-22 alert!

> > Is it possible to execute a single command like
> > "install-info --info-dir=/usr/share/info /usr/share/info/foo.info"?
> > About every package requires a line like this for its .info file.
>
> At the moment only when you put it in an install script.

I don't see this as a mandatory feature at the moment, because we can
split a package into and and if we really want to give the user the
opportunity to decide whether he wants inforeader docs or man pages
installed or not.

> > FILE pathname [owner group mode]
> > Adding hundreds of files to PKGINFO could be very time consuming.
> > What about "FILE ALL" which installs all files automatically?
> It's good to have some redundant information like this in the PKGINFO
> file. It could be used, among other things, to detect corrupted packages.

Besides that, it later allows building a dictionary to identify which file
comes from where.

> Of course it would be good to have a 'mkpkginfo' command that adds all
> the files in a directory tree to a PKGINFO file.

ACK. This was the intention.

> > How do I specify "mode"? E.g. how do I produce a read-only file?
>
> Access permissions are specified using the 3-digit octal notation known
> from Unix. This would be 0444 then.

Which will consequently be mapped to 'attrib +r FILE' in the installpkg
script.

> > What about an "OVERWRITE {YES|NO}" keyword? If the file structure
> > of a package changes between different releases then the new package
> > gets "OVERWRITE NO", which means the old package must be uninstalled
> > first. Even the "-f" switch won't install the new version over
> > the old one (to avoid files from the old version being left behind).
>
> Upgrading existing packages to a newer version is one of the areas that
> needs some thought :)

XFree86 (not the OS/2 version) has solved this by putting configuration
files into a separate package which you may or may not want to install.
This may not be useful in general. The question about such a keyword is
what you want to achieve here. Suppose the situation where you want to
restore a fresh copy of a corrupted file from a package. Say the "base"
package contains templates for /etc/passwd and /etc/group, and you have
just corrupted /etc/group. Now what would you like to do:

1. reinstall the "base" package with OVERWRITE NO on /etc/passwd and
   /etc/group, and remain with your corrupted file;
2. reinstall with some "-f" option and flatten your correct /etc/passwd
   as well;
3. remove the "base" package first and lose /etc/passwd and /etc/group,
   then reinstall both files from scratch.

I doubt you really like any of these alternatives.
You gave the answer to your own question a paragraph below already: what
you want is a ".PRECIOUS" (in "make" speak) target; here it is called
CONFFILE.

> > CONFFILE pathname [owner group mode]
> > What exactly is the difference between CONFFILE and FILE?
> > Will the user be prompted for every CONFFILE as if it were read-only?
>
> That's the plan :) The keyword CONFFILE should be used for configuration
> files that can be modified by the user. installpkg should avoid simply
> overwriting these files when the user has possibly invested lots of time
> fiddling with his configuration.

ACK. To add on this: of course a POSTDEL script might fiddle with such a
file, e.g. by removing a user 'postgres' from /etc/passwd when package
'postgres' (which is supposedly the only consumer of such an account) is
deinstalled. And if you deinstall the "base" package, it is expected also
to delete the files it has created. The deinstall logic, however, has to
respect the table of dependencies: don't allow deinstallation of "base" as
long as any installed package still 'REQUIRES base'. "ux2_base" is
implicitly required by every other package: another reason for Occam's
razor not to multiply "core" packages.

Holger
--
Please update your tables to my new e-mail address: holger.veit$ais.fhg.de
(replace the '$' with ' at ' -- spam-protection)
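The CONFFILE rule discussed in this thread boils down to one decision in
the installer: a plain FILE is always (re)written, a CONFFILE is left
alone if the user already has one. A hypothetical Python sketch (names
invented; a real installpkg would of course also apply owner, group and
mode, and actually write the payload):

```python
import os

def place_file(path, payload, conffile, exists=os.path.exists):
    # Plain FILEs are always (re)written; a CONFFILE is "precious" and
    # is skipped if the user already has one, so a hand-edited
    # /etc/passwd-style file survives a reinstall.
    if conffile and exists(path):
        return 'kept'
    # (a real installer would write payload to path here)
    return 'installed'

# Simulated filesystem: /etc/passwd already exists, /bin/grep does not.
present = {'/etc/passwd'}
assert place_file('/bin/grep', '...', conffile=False,
                  exists=present.__contains__) == 'installed'
assert place_file('/etc/passwd', '...', conffile=True,
                  exists=present.__contains__) == 'kept'
assert place_file('/etc/passwd', '...', conffile=False,
                  exists=present.__contains__) == 'installed'
```

Note how this answers the OVERWRITE YES/NO dilemma above: the precious/
non-precious distinction is per file, not per package, so reinstalling
"base" can refresh a corrupted /etc/group without flattening /etc/passwd.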