1994 TECHNICAL REPORTS
University of California at Santa Cruz
Baskin Center for Computer Engineering and Information Sciences
Santa Cruz, California 95064 U.S.A.

ELECTRONIC COPIES

Most of these tech reports are available electronically as compressed PostScript files; the filenames are listed next to the tech report numbers. Obtain these files in either of the following ways:

1. Through anonymous ftp from ftp.cse.ucsc.edu, in the directory pub/tr. Log in as "anonymous", use your email address as the password, and specify "binary" before getting compressed *.Z files:

   ftp% cd pub/tr
   ftp% binary
   ftp% get ucsc-crl-year-number.ps.Z
   ftp% get INDEX
   ftp% mget ABSTRACTS*
   ftp% quit

   Uncompress *.Z files before printing:

   % uncompress ucsc-crl-year-number.ps.Z

   Send the uncompressed .ps file to your local printer:

   % lpr -P(your_local_postscript_printer) ucsc-crl-year-number.ps

   (Sometimes a report consists of multiple .ps files in a tarred and compressed directory, in the form ucsc-crl-year-number.tar.Z. First uncompress the directory, then de-tar it (% tar xvf filename.tar), then print the .ps files separately.)

2. By mail to the automatic mail server rnalib@ftp.cse.ucsc.edu. Put commands on the subject line or in the body of the message:

   @@ send ucsc-crl-year-number.ps.Z from tr
   @@ send INDEX from tr
   @@ send ABSTRACTS.1994 from tr
   @@ list tr          (gives a list of the files in the tr directory)
   @@ help commands    (gives a list of mail server commands)

   You may request more than one tech report at a time. The reports you receive back will be in multiple sections, with directions at the beginning of each and an "advance" message saying that all the sections are coming. Save the sections to separate files, and then run /bin/sh on each file in the proper order:

   % /bin/sh file1
   % /bin/sh file2
   etc.

   This should produce a file with the format ucsc-crl-year-number.ps.Z, which you can then uncompress and print as described above.
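For scripted retrieval, the naming scheme and command sequence above can be assembled programmatically. The sketch below is purely illustrative (the archive provides no such tool); it only builds the file name and the ftp commands described above:

```python
# Hypothetical helper: compose the file name and anonymous-ftp command
# sequence for one tech report, following the instructions above.

def report_filename(year, number):
    """Return the compressed PostScript file name for a report."""
    return "ucsc-crl-%s-%s.ps.Z" % (year, number)

def ftp_commands(year, number):
    """The command sequence to type at the ftp% prompt, as listed above."""
    return [
        "cd pub/tr",
        "binary",
        "get " + report_filename(year, number),
        "quit",
    ]

print("\n".join(ftp_commands("94", "01")))
```

The printed commands match the transcript above and can be pasted into an interactive ftp session with ftp.cse.ucsc.edu.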
PAPER COPIES

Order paper copies from: Technical Report Library, Baskin Center for Computer Engineering & Information Sciences, UCSC, Santa Cruz CA 95064, U.S.A. Payment in advance is required. Purchase orders are not accepted. Checks or money orders must be for U.S. dollars, payable through a U.S. bank, and made out to "UC Regents".

QUESTIONS: trs@cse.ucsc.edu

***************
UCSC-CRL-94-01 (available electronically as ucsc-crl-94-01.ps.Z)
CLASSIFYING NETWORKS: WHEN CAN TWO ANONYMOUS NETWORKS COMPUTE THE SAME VECTOR-VALUED FUNCTIONS?
Nancy C. Norris
March 1994, 108 pages (paper copy $14.00)

Abstract: An "anonymous network" is a computer network in which all processors run the same algorithm during a computation. We will consider two classifications of anonymous networks: networks will be called "f-equivalent" if the set of vector-valued functions each can anonymously compute is the same, and "p-equivalent" if the set of functions each can compute is the same "up to a permutation". We first give a characterization of the vector-valued functions a given network can anonymously compute. This extends a result due to Yamashita and Kameda characterizing the scalar-valued functions a network can compute. Next, we develop algebraic and graph-theoretic techniques for handling edge-labeled digraphs. We will use these techniques, along with results from algebraic automata theory and permutation group theory, to derive a polynomial-time algorithm for determining whether two networks are f-equivalent. This will yield a polynomial-time algorithm for determining whether two edge-labeled digraphs have the same lattice of quotient-graph isomorphisms, and will let us conclude that classifying networks by what they can compute is easy. Classifying networks by "p-equivalence", on the other hand, is likely to be a much harder problem.
We will present a polynomial-time transformation of the group-isomorphism problem to the problem of "p-equivalence". As of this writing, the best known algorithm solves group-isomorphism in n^(log n) steps, for a group of order n.

***************
UCSC-CRL-94-02 (available electronically as ucsc-crl-94-02.ps.Z)
MULTI-DIMENSIONAL TREES FOR CONTROLLED VOLUME RENDERING AND COMPRESSION
Jane Wilhelms and Allen Van Gelder
January 1994, 22 pages (paper copy $6.00)

Abstract: This paper explores the use of multi-dimensional trees to provide spatial and temporal efficiencies in imaging large data sets. Each node of the tree contains a model of the data in terms of a fixed number of basis functions, a measure of the error in that model, and a measure of the importance of the data in the region covered by the node. A divide-and-conquer algorithm permits efficient computation of these quantities at all nodes of the tree. The flexible design permits various sets of basis functions, error criteria, and importance criteria to be implemented easily. Selective traversal of the tree provides images in acceptable time, by drawing nodes that cover a large volume as single objects when the approximation error and/or importance are low, and descending to finer detail otherwise. Trees over very large datasets can be pruned by the same criterion to provide data representations of acceptable size and accuracy. Compression and traversal are controlled by a user-defined combination of modeling error and data importance. For imaging decisions, additional parameters are considered, including grid location, allowed time, and projected screen area. To analyse results, two evaluation metrics are used: the first compares the hierarchical model to actual data values, and the second compares the pixel values of images produced by different parameter settings.

***************
UCSC-CRL-94-03 (available paper copy only)
LOW-COMPLEXITY SUBBAND CODING FOR IMAGE COMPRESSION
Don Speck (Ph.D.
Dissertation Proposal)
January 1994, 33 pages, $7.00

Abstract: Subband coding compresses grayscale still images of natural scenes by quantization and entropy coding of critically subsampled spatial frequency bands. A specially-designed multi-channel spectral analysis filterbank splits the bandwidth in such a way that a matched synthesis filterbank can exactly reconstruct the original image from the unquantized subbands and, from the quantized subbands, recover the image with distortion little or no worse than quantizing the image directly. With no change in the total sample count, compression to less than 1 bit/pixel comes from entropy coding of the high-frequency channels, which quantize to almost all zeroes, because the subband transformation packs most of the energy into the low-frequency channel and allows quantization to be perceptually weighted by spatial frequency. FIR filterbanks are obvious candidates for parallel, pipelined VLSI implementation. Acceptance has been held back primarily by the computational expense of high-quality FIR filters and the lack of standardization. This Dissertation Proposal addresses both issues with a way of reducing the number of additions and multiplications for certain linear-phase exact-reconstruction two-channel filterbanks. Filterbanks constructed with this formulation gave better rate-distortion than any of the Daubechies maximally-flat wavelet filters, yet the simplest requires only 6 additions and 2 shifts per pair of samples; including its modification at the image boundaries, its implementation in C is only 8 lines.

***************
UCSC-CRL-94-08 (available electronically as ucsc-crl-94-08.ps.Z)
REINAS: REAL-TIME ENVIRONMENTAL INFORMATION NETWORK AND ANALYSIS SYSTEM: PHASE III - SYSTEMS DESIGN
P.E. Mantey, D.D.E. Long, J.J. Garcia-Luna, A.T. Pang, H.G. Kolsky (UCSC), B.R. Gritton (MBARI), W.A.
Nuss (NPS)
March 1994, 67 pages, $10.00

Abstract: REINAS is a continuing engineering research and development program with the goal of designing, developing and testing an operational prototype system for data acquisition, data management, and visualization. This system is to support the real-time utilization of advanced instrumentation in environmental science. Advances in continuous-time measurement and improved spatial resolution allow the monitoring and understanding of environmental phenomena in much greater detail than has previously been possible. The system is also designed to support the retrospective use of integrated environmental data sets. The project is a multi-year effort of the Baskin Center for Computer Engineering and Information Sciences of the University of California, Santa Cruz, in cooperation with environmental scientists from the Naval Postgraduate School (NPS), the Monterey Bay Aquarium Research Institute (MBARI), and the Center for Ocean Analysis and Prediction (NOAA/COAP). This report documents the Third Phase of REINAS and describes the actual System Design of the project as it is being implemented. During the last half of 1993, the detailed requirements of the proposed users of the system were sharpened, the technologies to be used in the prototype system were selected, and detailed architectures of REINAS and its subsystems for data collection, data management, processing, and visualization were worked out. The evaluation of the key components of the system is also a continuing task. Several new instruments and potential users not included in Phase II were studied. The objective of this document is to provide project participants and reviewers with the System Design of REINAS as it enters the prototype implementation phase. It spells out the technologies to be applied and the plans for implementing the project goals. The REINAS system has been designed for regional real-time environmental monitoring and analysis.
REINAS is a modern system, compatible with the Internet, for conducting interactive real-time coastal air/ocean science. The database design of REINAS is independent of specific database technology and is designed to support operational scientific needs throughout the entire scientific data life-cycle.

***************
UCSC-CRL-94-09 (available electronically as ucsc-crl-94-09.ps.Z)
TRANSIENT ANALYSIS OF COUPLED TRANSMISSION LINES USING SCATTERING PARAMETER BASED MACROMODEL
Jimmy Shinn-Hwa Wang and Wayne Wei-Ming Dai
April 1994, 27 pages, $6.00

Abstract: As system clock speeds increase, crosstalk becomes one of the major sources of noise which, in addition to delay and ringing, can limit the performance of high-speed digital systems. The congruence transformation can be used to decouple a coupled transmission-line system. In this paper, we extend the scattering parameter based macromodel to include the congruence transformation in order to analyze crosstalk in coupled transmission-line systems. After decoupling the n coupled transmission lines using the congruence transformation, computing the scattering matrix of the coupled transmission lines reduces to computing the scattering matrices of the congruence transformers and of the n decoupled single transmission lines. A very simple scattering parameter description of the congruence transformer for coupled lossless transmission lines is derived. It has been extended to handle coupled lossy transmission lines. Unlike some previous methods [14][16], which only consider the coupling between immediately adjacent lines, our method takes into account the coupling among all lines. We also present the results obtained from our simulator as well as those obtained from SPICE-like simulators (for example, SPICE3e2 and ASTAP) and state-of-the-art moment-matching simulators (for example, SWEC and Coffee).
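The decoupling idea in the abstract above can be illustrated generically: for a pair of symmetric lossless coupled lines, an eigenvector transformation of the product of the per-unit-length inductance and capacitance matrices turns the coupled system into independent single-line modes. The sketch below (NumPy, with made-up element values) shows the mechanism only; it is not the paper's macromodel:

```python
import numpy as np

# Illustrative per-unit-length matrices for two symmetric coupled
# lossless lines (values are made up, not taken from the paper).
L = np.array([[400e-9,  60e-9],
              [ 60e-9, 400e-9]])    # inductance, H/m
C = np.array([[100e-12, -10e-12],
              [-10e-12, 100e-12]])  # capacitance, F/m

# The eigenvectors of the product L*C give a transformation T that
# block-diagonalizes the system into independent propagation modes.
eigvals, T = np.linalg.eig(L @ C)
LC_modal = np.linalg.inv(T) @ (L @ C) @ T  # diagonal in modal coordinates

# Each decoupled mode behaves as a single transmission line whose
# propagation velocity is 1/sqrt(eigenvalue).
velocities = 1.0 / np.sqrt(eigvals)
print("modal velocities (m/s):", velocities)
```

Once the modes are decoupled, each can be analyzed as an ordinary single line, which is the reduction the abstract describes for the scattering-matrix computation.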
***************
UCSC-CRL-94-10 (available electronically as ucsc-crl-94-10.ps.Z)
A PATTERN-WEIGHT FORMULATION OF SEARCH KNOWLEDGE
Robert Levinson and Gil Fuchs (supersedes ucsc-crl-91-15 and ucsc-crl-89-22)
February 1994, 25 pages, $6.00

Abstract: Pattern-weight pairs (pws) are a new form of search and planning knowledge. Pws are predicates over states coupled with a least upper bound on the distance from any state satisfying that predicate to any goal state. The relationship of pws to more traditional forms of search knowledge is explored, with emphasis on macros and subgoals. It is shown how pws may be used to overcome some of the difficulties associated with macro-tables and lead to shorter solution paths without replanning. An algorithm is given for converting a macro-table to a more powerful pw set. Superiority over the Squeeze algorithm is demonstrated. It is also shown how pws provide a mechanism for achieving dynamic subgoaling through the combination of knowledge from multiple alternative subgoal sequences. The flexibility and execution-time reasoning provided by pws may have significant use in reactive domains. The main cost associated with pws is the cost of applying them at execution time. An associative retrieval algorithm is given that expedites this matching-evaluation process.

***************
UCSC-CRL-94-14 (available electronically as ucsc-crl-94-14.ps.Z)
THE APPLICATION OF STOCHASTIC CONTEXT-FREE GRAMMARS TO FOLDING, ALIGNING AND MODELING HOMOLOGOUS RNA SEQUENCES
Y. Sakakibara, M. Brown, R. Hughey, I.S. Mian, K. Sjolander, R. Underwood, and D. Haussler
November 1993, 37 pages, $7.00

Abstract: Stochastic context-free grammars (SCFGs) are applied to the problems of folding, aligning and modeling families of homologous RNA sequences. SCFGs capture the sequences' common primary and secondary structure and generalize the hidden Markov models (HMMs) used in related work on protein and DNA.
The novel aspect of this work is that SCFG parameters are learned automatically from unaligned, unfolded training sequences. A generalization of the HMM forward-backward algorithm is introduced to do this. The new algorithm, Tree-Grammar EM, based on tree grammars and faster than the previously proposed SCFG inside-outside training algorithm, produced a model that we tested on the transfer RNA (tRNA) family. Results show that after having been trained on as few as 20 tRNA sequences from only two tRNA subfamilies (mitochondrial and cytoplasmic), the model can discern general tRNA from similar-length RNA sequences of other kinds, can find the secondary structure of new tRNA sequences, and can produce multiple alignments of large sets of tRNA sequences. Our results suggest potential improvements in the alignments of the D- and T-domains in some mitochondrial tRNAs that cannot be fitted into the canonical secondary structure.

***************
UCSC-CRL-94-15 (available electronically as ucsc-crl-94-15.ps.Z)
UDS: A UNIVERSAL DATA STRUCTURE
Robert Levinson
April 1994, 17 pages (paper copy $5.00)

Abstract: This paper gives a data structure (UDS) for supporting database retrieval, inference and machine learning that attempts to unify and extend previous work in relational databases, semantic networks, conceptual graphs, RETE, neural networks and case-based reasoning. Foundational to this view is that all data can be viewed as a primitive set of objects and mathematical relations (as sets of tuples) over those objects. The data is stored in three partially-ordered hierarchies: a node hierarchy, a relation hierarchy, and a conceptual graphs hierarchy. All three hierarchies can be stored as "levels" in the conceptual graphs hierarchy. These multiple hierarchies support multiple views of the data, with advantages over any of the individual methods. In particular, conceptual graphs are stored in a relation-based compact form that facilitates matching.
UDS is currently being implemented in the Peirce conceptual graphs workbench and is being used as a domain-independent monitor for state-space search domains, at a level that is faster than previous implementations designed specifically for those domains. In addition, it provides a useful environment for pattern-based machine learning.

***************
UCSC-CRL-94-16 (available electronically as ucsc-crl-94-16.ps.Z)
EXPONENTIATED GRADIENT VERSUS GRADIENT DESCENT FOR LINEAR PREDICTORS
Jyrki Kivinen and Manfred Warmuth
June 1994, 55 pages (paper copy $9.00)

Abstract: We consider two algorithms for on-line prediction based on a linear model. The algorithms are the well-known Gradient Descent (GD) algorithm and a new algorithm, which we call EG(+/-). They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG(+/-) algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG(+/-) and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but that EG(+/-) has a much smaller loss if only a few components of the input are relevant for the predictions. We have performed experiments which show that our worst-case upper bounds are quite tight already on simple artificial data.

***************
UCSC-CRL-94-17 (available electronically as ucsc-crl-94-17.ps.Z)
A POOR MAN'S WATCHPOINTS
Max Copperman and Jeff Thomas (supersedes ucsc-crl-93-12)
April 1994, 14 pages (paper copy $5.00)

Abstract: Bugs that result from corruption of program data can be very difficult to track down without specialized help from a debugger. If the debugger cannot help the user find the point at which data gets corrupted, the user may have a long iterative debugging task.
If the debugger is able to stop execution of the program at the point where data gets corrupted, as with watchpoints (also known as data breakpoints), finding a data corruption bug may be a very simple task. In this paper, we discuss a method of implementing watchpoints on a system without hardware watchpoint support. By instrumenting the program code to check memory accesses, and supplying an interface to the instrumentation in the debugger, we provide an efficient, general method of implementing watchpoints.

***************
UCSC-CRL-94-18 (available electronically as ucsc-crl-94-18.ps.Z)
A FIELD-PROGRAMMABLE PROTOTYPING BOARD: XC4000 BORG USER'S GUIDE
Pak K. Chan
April 1994, 105 pages (paper copy $14.00)

Abstract: The XC4000 BORG board is a PC-based prototyping board with two "user" FPGAs, two "routing" FPGAs, and a fifth FPGA which implements the glue logic for the PC bus. The BORG board is a reusable educational tool intended for a variety of classes; the BORG board, its toolset, and the reprogrammability of the FPGAs further reduce the time and cost of constructing prototypes using FPGAs. This report documents the design, implementation, and use of BORG, a field-programmable prototyping board.

***************
UCSC-CRL-94-19 (available electronically as ucsc-crl-94-19.ps.Z)
DIRECT VOLUME RENDERING VIA 3D TEXTURES
Orion Wilson, Allen Van Gelder, Jane Wilhelms
June 1994, 11 pages (paper copy $5.00)

Abstract: The advent of very fast texture mapping hardware in modern graphics workstations has warranted research into rendering techniques that use texture mapping to full advantage. We have developed a new and easy-to-implement method for direct volume rendering that produces high-quality images at speeds approaching two orders of magnitude faster than existing techniques, on workstations with hardware support for three-dimensional texture maps. A rectilinear data set is converted into a three-dimensional texture map containing color and opacity information.
In the rendering phase, the texture map is applied to a stack of parallel planes, which effectively cut the texture into many slices. The slices are composited to form an image of the original data set. This paper describes the theory and implementation of this technique.

Keywords: Computer Graphics, Scientific Visualization, 3D Texture Mapping, Direct Volume Rendering.

***************
UCSC-CRL-94-20 (available electronically as ucsc-crl-94-20.ps.Z)
CARAFE USER'S MANUAL, Release Alpha.4 (supersedes 90-61 and 92-33)
Alvin Jee and Cyrus Bazeghi
June 1994, 71 pages (paper copy $7.00)

Abstract: This document describes the user interface for CARAFE, the second-generation Inductive Fault Analysis (IFA) program. The syntax of all the commands and their parameters is described in this document, along with a description of the formats of the various files used and created by CARAFE.

***************
UCSC-CRL-94-22 (available electronically as ucsc-crl-94-22.ps.Z)
MORPH II: A UNIVERSAL AGENT: PROGRESS REPORT AND PROPOSAL
Robert Levinson
June 1994, 15 pages (paper copy $5.00)

Abstract: This short report describes progress on the Morph Adaptive Pattern-Oriented chess system over the last four years and describes ongoing work on Morph II, a domain-independent machine learning system and problem-solver. Morph II is a direct generalization of the original Morph model and incorporates significant improvements to all aspects of the original system. The Universal Agent (Morph II) falls directly within the hierarchical reinforcement learning paradigm that has been gaining popularity in the last few years, but goes beyond it: in this learning model, the system is responsible for abstracting its own features and patterns, developing its own learning modules, and employing transformations on the representations used. In essence, the system is learning neurally (through weight propagation), genetically (through pattern evolution) and symbolically (through its own abstractions).
Still, the system maintains uniformity, in that each module at any level of the hierarchy is an instantiation of the same learning strategy. This report describes the motivations and philosophy behind Morph II, outlines current and future performance objectives, and discusses the potential impact of this new technology.

***************
UCSC-CRL-94-23 (available electronically as ucsc-crl-94-23.ps.Z)
STOCHASTIC CONTEXT-FREE GRAMMARS FOR MODELING THREE SPLICEOSOMAL SMALL NUCLEAR RIBONUCLEIC ACIDS
Rebecca Underwood (Masters Thesis)
June 1994, 75 pages (paper copy $11.00)

Abstract: In this thesis, stochastic context-free grammars (SCFGs) are applied to the problems of folding, aligning and modeling families of homologous RNA sequences---specifically, the small nuclear RNA sequences of the spliceosome. SCFGs generalize the hidden Markov models (HMMs) used in related work on protein and DNA to capture the sequences' common primary and secondary structure. This thesis discusses a recently introduced algorithm, Tree-Grammar EM, for deducing SCFG parameters automatically from unaligned, unfolded training sequences [SBU+93, SBM+94, SBH+94a] and demonstrates its application to modeling three of the five prominent trans-acting factors in spliceosomal small nuclear RNA: U1, U5 and U6. Tree-Grammar EM, a generalization of the HMM forward-backward algorithm, is based on tree grammars and is faster than the previously proposed inside-outside SCFG training algorithm. Results show that after having been trained on as few as about 20 snRNA sequences, each of the three models can discern snRNA sequences from similar-length RNA sequences of other kinds, can find the secondary structure of new snRNA sequences, and can produce multiple alignments of snRNA sequences.
Keywords: stochastic context-free grammar, small nuclear RNA, multiple alignment, database search, folding sequences

***************
UCSC-CRL-94-24 (available electronically as ucsc-crl-94-24.ps.Z)
USING MARKOV MODELS AND HIDDEN MARKOV MODELS TO FIND REPETITIVE EXTRAGENIC PALINDROMIC SEQUENCES IN ESCHERICHIA COLI
Kevin Karplus
July 1994, 28 pages (paper copy $6.00)

Abstract: This paper presents a technique for using simple Markov models and hidden Markov models (HMMs) to search for interesting sequences in a database of DNA sequences. The models are used to create a cost map for each sequence in the database. These cost maps can be searched rapidly for subsequences that have significantly lower costs than a null model. Milosavljevic's algorithmic significance test is used to determine when a found subsequence is significant. The sequences reported are trimmed to maximize the signal-to-noise ratio (cost savings / sqrt(length)). Methods are given for automatically constructing simple Markov models and hidden Markov models from small training sets. The techniques are illustrated by searching a database of E. coli genomic DNA, EcoSeq6, for clusters of Repetitive Extragenic Palindromic sequences (REPs). Of the known REPs, 91% are found with simple Markov models starting with a single REP cluster as a seed, and 95% are found by a hidden Markov model built from the results of the simple Markov model search. There are no false positives from the simple Markov models, and the few extra sequences found by the HMMs may be genuinely related sequences.

Keywords: REPs, HMM, Markov model, hidden Markov model, E. coli, cost map

***************
UCSC-CRL-94-25 (available electronically as ucsc-crl-94-25.ps.Z)
UNSUPERVISED LEARNING OF DISTRIBUTIONS ON BINARY VECTORS USING TWO LAYER NETWORKS
Yoav Freund and David Haussler
June 1994, 41 pages (paper copy $8.00)

Abstract: We present a distribution model for binary vectors, called the influence combination model, and show how this model can be used as the basis for unsupervised learning algorithms for feature selection. The model can be represented by a particular type of Boltzmann machine with a bipartite graph structure that we call the combination machine. This machine is closely related to the Harmonium model defined by Smolensky. In the first part of the paper we analyze properties of this distribution representation scheme. We show that arbitrary distributions of binary vectors can be approximated by the combination model. We show how the weight vectors in the model can be interpreted as high-order correlation patterns among the input bits, and how the combination machine can be used as a mechanism for detecting these patterns. We compare the combination model with the mixture model and with principal component analysis. In the second part of the paper we present two algorithms for learning the combination model from examples. The first learning algorithm is the standard gradient ascent heuristic for computing maximum likelihood estimates for the parameters of the model. Here we give a closed form for this gradient that is significantly easier to compute than the corresponding gradient for the general Boltzmann machine. The second learning algorithm is a greedy method that creates the hidden units and computes their weights one at a time. This method is a variant of projection pursuit density estimation. In the third part of the paper we give experimental results for these learning methods on synthetic data and on natural data of handwritten digit images.
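The tractability that such closed forms exploit comes from the bipartite structure: summing over all hidden configurations factors into one term per hidden unit. The sketch below verifies that identity for a generic two-layer Boltzmann-style model with illustrative weights; it is not the paper's exact influence-combination parameterization:

```python
# Generic bipartite (two-layer) Boltzmann-style model: the sum over all
# 2^k hidden configurations factors into a product of k terms, one per
# hidden unit. Weights and biases below are made up for illustration.
import itertools
import math

w = [[0.5, -1.0, 0.2],   # weight vector of hidden unit 0
     [1.5, 0.3, -0.7]]   # weight vector of hidden unit 1
c = [0.1, -0.2]          # hidden-unit biases

def activation(j, x):
    """Input to hidden unit j for visible vector x."""
    return sum(w[j][i] * x[i] for i in range(len(x))) + c[j]

def brute_force(x):
    """Unnormalized probability: sum over all hidden configurations."""
    total = 0.0
    for h in itertools.product([0, 1], repeat=len(w)):
        total += math.exp(sum(h[j] * activation(j, x) for j in range(len(w))))
    return total

def factored(x):
    """Closed form: the bipartite structure turns the sum into a product."""
    prod = 1.0
    for j in range(len(w)):
        prod *= 1.0 + math.exp(activation(j, x))
    return prod

x = [1, 0, 1]
assert abs(brute_force(x) - factored(x)) < 1e-9
```

The brute-force sum costs 2^k terms while the factored form costs k, which is the kind of saving that makes gradients for bipartite machines easier to compute than for general Boltzmann machines.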
***************
UCSC-CRL-94-27 (available electronically as ucsc-crl-94-27.ps.Z)
COMPUTER SCULPTING OF POLYGONAL MODELS USING VIRTUAL TOOLS
James R. Bill and Suresh K. Lodha
July 1994, 27 pages (paper copy $6.00)

Abstract: This report describes a system for interactive manipulation of polygon meshes using virtual sculpting tools and mesh refinement operations. The virtual tools are geometric objects with shapes described by superquadric equations. The user controls a virtual tool to sculpt a free-form polygonal mesh model. The vertices of the mesh respond to the tool in a semi-realistic manner, allowing the user to push, pull, and deform the mesh in a variety of ways. Mesh refinement operations allow the user to control refinement levels and create smooth objects. Interactive shadows and a virtual trackball considerably enhance the intuitive manipulation and sculpting of models. The strength of the system has been demonstrated by sculpting free-form surfaces such as a dog, flower, television set, and dinosaur, all with widely varying levels of detail.

Keywords: Computer graphics, interactive modeling, computer sculpting, virtual tools, polygon meshes, smoothing, subdivision, shadows.

***************
UCSC-CRL-94-28 (available electronically as ucsc-crl-94-28.ps.Z)
ON-LINE PREDICTION AND CONVERSION STRATEGIES
N. Cesa-Bianchi, Y. Freund, D.P. Helmbold, and M. Warmuth
August 1994, 35 pages (paper copy $7.00)

Abstract: We study the problem of deterministically predicting boolean values by combining the boolean predictions of several experts. Previous on-line algorithms for this problem predict with the weighted majority of the experts' predictions. These algorithms give each expert an exponential weight beta^m, where beta is a constant in [0,1) and m is the number of mistakes made by the expert in the past. We show that it is better to use sums of binomials as weights.
In particular, we present a deterministic algorithm using binomial weights that has a better worst-case mistake bound than the best deterministic algorithm using exponential weights. The binomial weights naturally arise from a version space argument. We also show how both exponential and binomial weighting schemes can be used to make prediction algorithms robust against noise.

Keywords: Online learning, conversion strategies, noise robustness, binomial weights, exponential weights, weighted majority algorithm, expert advice, mistake bounds.

*************** UCSC-CRL-94-2