DIGICAM: Digital Photography Makes Sense -- Finally

by Donald Jenner

Or is that, "making sense of digital photography, finally"? Digital photography appears to have matured rapidly during 2002. The technology is better: The cameras are neater and the firmware works better. Offerings from both traditional camera makers and consumer electronics firms are substantial, and the range can be puzzling. There is, literally, something for everyone out there; most of what's there is surprisingly good.

The last time CitiGraphics looked at digital cameras, the informed choice was, stick to film. A regular snapshot, scanned at fairly high resolution, was by far a better picture than the best picture taken with a digital camera. Improved CCDs (the charge-coupled devices that actually make the picture) and better firmware make for images that are color-rich and of sufficient resolution that they can be managed pretty much the same way scanned photos are. Combined with improved printing options, the resulting all-digital photographic output can be distinguished from analogue, chemistry-based photography only after careful examination — maybe with a loupe. Not the sort of thing one normally does.

Even pros dedicated to film, on both sides, think that digital, film-less photography is where things are headed. Observes one engineer, with a long connection to the great names in German cameras, "Your grandchildren will be amazed at the entire idea of photographic chemistry." That this view is widely held was evident at the 2002 PhotoPlus expo in New York. This show caters to professionals and high-end amateurs. The cameras being shown included film cameras from all the great names -- but all of them were featuring digital cameras.

Making Sense of It All

Perhaps the biggest problem for some purchasers is figuring out what you need to do the job. The language of megapixels doesn't really make sense to people who've thought mostly in terms of a piece of film. Nor is the relationship of megapixels to print-resolution usually presented clearly. A little arithmetic is a good thing. Rhona Gutierrez, a New York City designer and CitiGraphics' photography and Mac-products editor, who uses digital photography for professional work, summarized things this way:

"It all depends on how you plan on using your camera and photos. 1.x mp (megapixel) cameras are best for for basic usage, things like e-mailing photos, posting photos on the Web, and printing small, everyday shots. Think of these as the equivalent of point-and-shoot snapshot cameras. 2+ mp cameras are still for general use, but make bigger prints -- up to 5"x7". 3+ mp cameras are ideal for "average" photographers and high-quality general use. They are good for everyday and special occasions, and for printing a variety of sizes -- up to 8"x10" prints. 4+ mp cameras are for avid photographers who want near-35mm quality results and make oversize 11"x14" images." [Ms. Gutierrez's choice, by the way, is a Sony 3mp compact camera, which she uses for both general-purpose photography and for "studio shots" in catalogue design.]

Ms. Gutierrez is seconded by the companies that actually make the CCD silicon on which the technology rests. The general take is, buy a 2-something megapixel camera for most ordinary needs. Buy more megapixels based on need.
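
To put that arithmetic on the table, here is a back-of-the-envelope sketch (our own figuring, not Ms. Gutierrez's; it assumes a 4:3 sensor and treats 200 to 300 dpi as the working range for photo-quality prints):

    # Rough megapixel-to-print-size arithmetic. Assumes a 4:3 sensor and
    # that photo-quality prints want somewhere between 200 and 300 dpi.

    def max_print_size(megapixels, dpi, aspect=(4, 3)):
        """Largest (width, height) print, in inches, at the given dpi."""
        w_units, h_units = aspect
        # Solve (w_units * k) * (h_units * k) = total pixels for the scale k.
        k = (megapixels * 1_000_000 / (w_units * h_units)) ** 0.5
        return (w_units * k / dpi, h_units * k / dpi)

    for mp in (1.3, 2.1, 3.3, 4.1):
        for dpi in (300, 200):
            w, h = max_print_size(mp, dpi)
            print(f"{mp} mp at {dpi} dpi: about {w:.1f} x {h:.1f} inches")

At 200 dpi, the output lands roughly in the neighborhood of the 5"x7" and 8"x10" ladder quoted above; at 300 dpi, figure one print size smaller.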

{short description of image}

Leica's Digilux 1 and Panasonic's Lumix DMC-LC5: one pea in two different pods. But Panasonic sells a range of cameras with Leica lenses and similar features in lower pixel-depths at lower prices.

What Brands are Best?

Making a purchase is tough; the usual camera-brand options don't make a lot of sense.

Many brand-name digital cameras are actually private-branded versions of the same camera sold under other names. For example, the top-of-the-line Panasonic Lumix differs only in cosmetics from the Leica Digilux 1. Functionally, they are the same. On the other hand, Panasonic offers a full range of digital cameras with comparable features in lower pixel-depths, combined with tuned Leica lenses, at lower prices — a possible value for some buyers.

Olympus makes an interesting claim for its cameras: At the recent PhotoPlus Expo in New York, its reps cited a German study comparing up-scale "pro" digital cameras from famous-name makers with Olympus's own consumer-oriented line. The study (a copy of which we sought without success, however) suggests that the more aggressively priced Olympus cameras made better — sharper, &c. — pictures simply because their lenses were better tuned to the specific technology than the "retrofitted" film-camera lens designs used by other makers.

Dimage X

The Minolta Dimage X and Xi offer most features in a super-mini format. Always have your camera with you, ready to go.

Another element: Form factor. Bigger need not be better. Minolta's Dimage X and Xi mini-cameras offer the most important picture-taking features at high enough pixel-counts to be serious contenders, in a package that wouldn't necessarily make a bulge in a suit-coat pocket. Top-of-the-line "pro" cameras have the look and feel of older 35mm cameras, and offer special capabilities appropriate to professional photographers in studio or field sessions; but enough of those features (even some special-effects features and optical zoom) show up in the small-format cameras, at lower prices, to make them interesting to all but the most dedicated gadget-freaks.

There are differences. Comparisons suggest that different makers have implemented different æsthetic values in the way the firmware (the camera's built-in software) manages the image. Some design-families (which stretch across brand-name lines, reflecting the real camera-maker's preferences, rather than the brander's preferences) have better user-engineering and interface designs. These are more often matters of personal taste than functional differences. Some may prove invisible even to discriminating users.

Some of the differences, brand-to-brand, are more important. The digital-media question is still not entirely resolved. Most makers use some version of the CompactFlash card; some (notably Panasonic and Leica) use the smaller, newer Secure Digital (SD) cards. Sony uses its proprietary Memory Stick. Some makers offer rechargeable batteries as part of the basic package; some offer a range of power options — long-life disposables, easy-to-find (but shorter-life) alkaline batteries and rechargeable battery packs. Generally, the closer to "standards", the better the choice; it's not always easy to find a proprietary replacement for an unusual battery or memory medium in out-of-the-way places, and digital photography is certainly not as ubiquitous as film.

How good are the pictures?

The key question is, are the pictures good enough that a digital camera, costing substantially more than a comparable film camera, makes sense?

The answer appears to be a resounding "yes" for any camera sporting two megapixels or better. We took pictures using an Olympus D-520 digital camera, comparing them to pictures taken with the same kind of 35mm camera lots of people use for practical photography, then scanned for use in computer-composed jobs. There was no difference in any practical way we could find. Color values were similar. Adjusting the images in various ways was no different. Images were equally sharp. The results, composed in the computer and output to PDF for reproduction by a commercial printer, were no different. Even the time element was about the same. [The added convenience of uploading already-digital images to the computer rather than scanning was offset by the need to rename all the pictures in suitably mnemonic ways.]

{short description of image}

We used an Olympus 2mp D-520 for our tests and in general use. It proves ideal for most uses, and is small enough that it's always with one of us.

Prices

Good news here: Prices are falling. The last round of deep-discounting (a result of Polaroid and others cleaning out the warehouses) was a bit disappointing. The cameras were not up to comparison with film cameras, probably due to older technology. Recently, the clean-out-the-warehouse gambit has put very nice point-and-shoot brand-name digital cameras in lower pixel-depths (commonly without optical zoom and other fancy features) in deep-discount stores for under $100. Even in the last couple of months, better-quality cameras have dropped at the tier-one retail shops by eight or ten percent. A digital camera selling for $300-something in August can very likely be had for around $275 today. New models should be showing up with some frequency; a still-tough economy (and the fact that Japan, particularly, is banking on an export-led recovery) should keep downward pressure on prices.

The newer models sometimes sport improved feature-sets, rarely dramatically better technology. This means it's probably time for the hold-out folks to go buy an entry-level digital camera and learn the virtues and limits of the new technology. Those who hit it right with the first-time purchase are all set. Those who want more capability or features can buy the higher-priced feature-rich model with greater confidence. Best of all, the try-it camera becomes a good backup.

Visual Communicator: TV Studio for the Rest of Us

Finally! Serious Magic's Visual Communicator is a production tool for ordinary folks who want to merge graphics and video in more effective presentations. It is priced to sell, and it works remarkably well — better than promised by the company. The resulting presentations are top-notch.

{short description of image}
The Visual Communicator scripting window puts all the tools where you need them. Top-right: You compose the script and the various visual elements. Top-left: You see what's happening. Lower-left: various controls on different "cards". Lower-right: directory access to effects, sounds and so on.

Visual Communicator comes in two forms. The basic product supplies the CD with the software and a readable, useful manual; it costs $100. For $50 more, the company packs in a lapel microphone (the kind you see on TV anchor persons) and a large rectangle of green vinyl for use as a backdrop, when using the program's ability to substitute virtual backgrounds ("chroma-key").

Setup is easy and largely automatic. The first thing Visual Communicator does at its initial launch is check for updates. Normally, autoupdating is not a favorite feature around here. In this case, it was worthwhile; the company made substantial improvements to large-file handling. In short, register and do the updates.

Using the product is also straightforward. There is merit to reading the manual; there are some things — mostly design issues — to think about and the short manual lays them out effectively. Run some of the demos for the same reason — to get the feel of things. Expect no problems; the learning curve is blessédly short — mainly because Serious Magic planned this to be something ordinary folks could use in a natural way.

That process is simple enough: You write a script in the teleprompter window. You add in appropriate content — drag in an image or other element. Visual Communicator creates a "tray" for this element, with a default video effect to manage the transition from the previous state. Drag in a new effect according to taste. We found it easiest to have the effects listing open in the Visual Communicator directory window, with a listing of prepared images showing in a standard Windows Explorer window.

Content includes just about anything that can be rendered in JPEG format; that means older presentation graphics from, e. g., PowerPoint, can be recycled (PowerPoint includes an option for export-to-JPEG). Custom graphics from a wide variety of standard PC graphics programs work well; we tested with images from Corel Draw and PhotoPaint. If that's not easy enough, Visual Communicator includes a title-making program that's just dandy for doing bulleted lists and other simple slides.

Video content can be added from just about any source that connects to Windows. We used a very simple web-cam; hardly great video, but cheap and easy to use. Better quality cameras provide better video, obviously. Video — preexisting or to be captured as part of the final recording — is added just like still content: Drag it in, and assign an effect. The same applies to audio elements.

A series of tabbed "cards" in the lower-left of the Visual Communicator window controls a number of things. The most important turn out to be the controls for the scrolling speed of the "teleprompter" and the rate at which the various effects take place. We found it useful to slow things down a lot in the teleprompter window, and we found ourselves tweaking the transitions in the effects control rather carefully as we worked through the rehearsal.

That rehearsal phase is important. Click the button under the video-recording window and the system starts scrolling. You can stop the rehearsal at any point in the script; you can start it at any point in the script and this is a good thing, as it allows for fine-tuning timing elements. This fine-tuning process took the longest to get the hang of; the results were generally worthwhile.

Script written, various content elements with related effects in place and the whole thing rehearsed, you press the record button and Visual Communicator starts up. 4-3-2-1 and bingo! you're live on camera. Finish the recording and play it back to see if you like the result. If you like it, you can render it to different output formats. Windows AVI is the most complete and the most share-able; Windows ASF is more practical, and the system allows for rendering the recorded video in different frame-rates and so on, resulting in different size files for different forms of distribution — use on web-sites, e-mail content and so on.

Video is not particularly easy on computer resources. It takes more memory, both live, volatile RAM and permanent storage, than other kinds of presentation graphics. The reason is simple: A video comprises around 30 pictures (frames) per second, to which additional audio data has to be added. When the live-memory buffers are filled, Visual Communicator has to write to disk. This is a constraint on any video or high-end graphics program; the problem with video is, you will get some glitches during the brief interval when that write-to-disk takes place. In editing suites, you can sometimes just cut out the problem and work around it; Visual Communicator avoids that level of complexity. [Even with that facility, most systems used for work with video have extra RAM and large, fast hard disks that are kept "on" to avoid spin-up issues.]
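
The arithmetic behind that memory pressure is easy to sketch. The frame sizes below are illustrative assumptions, not Serious Magic's figures:

    # Back-of-the-envelope data rate for uncompressed video, before audio
    # or compression enter the picture. Frame sizes are illustrative only.

    def raw_video_rate(width, height, fps=30, bytes_per_pixel=3):
        """Bytes per second of uncompressed 24-bit video."""
        return width * height * bytes_per_pixel * fps

    for w, h in ((320, 240), (640, 480)):
        mb_per_sec = raw_video_rate(w, h) / 2**20
        print(f"{w}x{h} at 30 fps: about {mb_per_sec:.1f} MB per second")

At rates like these — roughly 7 to 26 MB every second — even a generous RAM buffer fills in seconds, which is why the write-to-disk pause is unavoidable.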

In the tests we did — using a system with less memory than Serious Magic recommends — we found that this problem could be managed by carefully noting when in the planned script the system was likely to have to write, and building in a pause at that point. The pause was covered with a still image, with no video or audio activity. This worked fairly well; where it didn't, there was only a brief loss of sync in the "live" video element right then, which corrected itself in the cycle between video-captures and still-image. The results were within tolerable limits, and a system with more memory and still faster hard disks would have done better. Even so, we got some improvement by quickly defragmenting the hard disk being used for recording.

A particularly nice surprise: By tweaking the "custom" settings a bit, we could get smaller "low-fi" .ASF files that rivaled the better-quality, substantially larger files. For websites and e-mail transmission, getting comparable results in a file one-third the size is not a small matter.

If you haven't done defragmentation for a while, on the grounds that the standard Windows utility (based on antique software from Symantec, as I recall) is just too slow, run-do-not-walk to your local software outlet and buy Diskeeper from Executive Software. This is serious software — fast and accurate. It works like a charm in both NT/2000 and 95/98/ME versions. Best of all, it works in the background. This is important: Diskeeper can be set to load automatically and run on a schedule. The result is a system that's always "in tune". Executive Software, the producer of Diskeeper, offers a complete line of top-notch system-maintenance utilities for Windows; take a look at them at www.executive.com.

Two concerns: Serious Magic supplies and automatically installs the latest versions of Windows Media Player and Internet Explorer. We prefer not to use IE6 because of known security issues; it might be good to immediately revert to an older version — one for which third-party (not Microsoft...) privacy and other security fixes are available. It is also possible to revert to the older version of Windows Media Player, and it seems to work just fine with Visual Communicator.

If you do decide to use IE6, take a look at Imagine LAN's PI Protector. This program moves all the privacy-violation stuff (cookies, &c.) to a flash-RAM USB device (e. g. DiskOnKey or CyberKey). Tracking is redirected to something that is inherently not trackable. See www.imaginelan.com.

The other issue: This software uses Microsoft's DirectX; older display and sound system drivers may need updating to support more recent DirectX software. We found it necessary to update display, camera and sound drivers. Updating drivers is never a bad thing. The DirectX diagnostic tool, DXDIAG.EXE (look in the Windows System directory), provides a report suggesting such driver updates. Serious Magic has the right kind of tech-support, if things get tough — the kind that can answer questions quickly and efficiently.

So what's the story? Visual Communicator adds a video component to just about everyone's presentation-support package. It has several possible applications. Right out of the box, it can produce effective content for company web-sites. With care and attention to detail, it can be used to enhance presentations via e-mail and other places where getting the message across is better accomplished if the person on the receiving end can actually see the gleam in the presenter's eye. I suspect it could also be used to create content for use with more advanced video-editing suites. That's amazing value. Also, a lot of fun.

[Check it out. See some samples at the Serious Magic website.]

CYCAS CAD on Linux: Prince in a Frog Suit?

CYCAS is CAD for architects. It runs under the Linux operating system. Choosing this software is therefore somewhat more complex than choosing some other draughting program.

OOPS! A Windows version is now available. Look for our upcoming revised review of this design product.

CYCAS is an interesting piece of software. The country of origin is Germany, and it appears that the programmer originally produced the software to run under Unix on the late and somewhat-to-be-lamented Amiga. The publisher seems to have started out in another business as well; she sports the title "Diplom Ingenieur". The manual and software are supplied in a slipcased ring-binder. There is a feeling of days gone by, in short; I haven't seen seriously commercial software with an instant-print manual in a ring-binder in a long time.

The manual is supplied in English translation (a good thing; my German has long since passed beyond being merely rusty); it still feels as if it were written in German. This is not entirely a bad thing; the manual is relatively clear and provides a reasonable beginning point for learning the software. The software itself is supplied on CD. A puzzle: The publisher's website (www.cycas.de) says that CYCAS for Linux runs on Red Hat and SuSE versions. The installation instructions for the software don't seem to be that fussy. I tested the software with the Corel implementation of Debian Linux as well, and encountered no installation problems.

The CYCAS workspace is well laid out and easy on the eyes. The right-hand control panel lists the kinds of elements available in the top button columns. Select an element, and the rest of the control panel shifts to the relevant commands controlling the creation of that element. Along the bottom, CYCAS locates more general control functions and inputs. The zoom controls and the toggle controlling snap-to input are along here. An input window accepts measurements, or signals expected input.

{short description of image}
Proof is in the pix: 2D plot & 3D rendering in CYCAS CAD under Windows
{short description of image}

In fact, despite the somewhat amateurish appearance of the manual, CYCAS appears to be a very solid, very stable and elegantly written program. It runs equally well under the GNOME and KDE desktop interfaces (these are the two most important user-interfaces to the X Window System used with Linux). The overall appearance of the working environment is remarkably clean and logical.

For example, to input a wall, first choose the "wall" element, set the wall's width and toggle the [draw a] "wall" command from the list of wall-element options. Click in the drawing area to set the origin of the wall; the input area requests a length. Add the length and press enter; the machine puts up a wall. Other architectural elements are as easily inserted and connected and trimmed up. Do the design, then do the rendering.

CYCAS's 3D capabilities show its graphics-computer (Amiga) heritage. Buildings do not exist in two dimensions, and architectural software that more or less automatically understands how to display an evolving design in three dimensions is a more able tool for architects.

CYCAS has very nice 3D capability. Set heights and so on as required, and this information is automatically incorporated into the drawing. Lay out the design in two dimensions; shift to 3D mode and view from several different viewpoints, perspectively or otherwise. The software can create fully-rendered images. 3D rendering uses POV-Ray, but the company says that ray-tracing support for other rendering engines is built in as well. The software saves in a variety of standard formats (2D/3D DXF, EPS, Lightwave, Real3D); it was not immediately evident how this is done. CYCAS comes with a generous library of predefined symbols. They are easily accessed and used, and the software fully supports new-symbol creation.

To put it shortly: CYCAS is smart, easy-to-use and well-thought-through software for architects. The tutorial program is an adequate beginning, from which to continue on one's own. The clear user interface (which the company says can be modified to suit a user's special needs or preferences - this was not tested, as the default seemed close to perfect) is no small part of what seems likely to be a very short learning curve for this software. Just about anything an architect wants to do should be do-able in the CYCAS environment, from basic design to site layouts to client presentation. CYCAS has all the versatility wanted in good CAD software, well-tuned to the special needs of architects.

Sadly, to use CYCAS, you have to use Linux. Linux is simply too much a specialist's working environment, as it has developed, to become anything like a mainstream working environment. There are reasons for the general popularity of the Microsoft Windows environment; it is not all marketing hype. Most people do more than one thing on their desktop and other personal computers, and there are more and better ways to do that under Windows than under Linux, generally speaking. Those tools which - in many cases, for the very simple reason of best-in-class - have become standards throughout a large part of the computer-using universe are simply not available for Linux. The alternatives are not bad (e. g., StarOffice is a good alternative to Microsoft Office) - but they have not gained ubiquity, and the ability to pass files back and forth through filters is not a substitute.

Moreover, the Linux world is like the Unix world in far too many ways. Like Unix (in days gone by, anyway), Linux comes in lots of flavors, and they vary enough to be troublesome. [CYCAS, by the bye, seems to be more tolerant than some Linux applications - a very real kudos to its programming team.] Like Unix, "open standard" has come to mean "having so many options that there is no standard". Like Unix, the part of Linux users see is the many bits and pieces of software, not generally well-integrated and even more poorly documented than was ever the case for Unix. Like Unix, there are few easy ways to integrate a Linux system into a more standard-brand Windows or Mac network. [Corel Corp.'s foray into the Linux world featured a browser that did look at the larger network, and managed the integration almost seamlessly. Sadly, the implementation was seriously flawed in other ways and has, in any case, apparently been abandoned.]

In a word, Linux, like Unix before it, is user-hostile. Heaven forfend, one should wish to merely use the computer without having to go back to school to become an information-technology professional! Making a buying choice in favor of CYCAS means accepting the notion of Linux as the "shop standard" - and probably employing a Linux specialist to keep the systems up and so on. That is not a common choice.

Never Too Rich, Never Too Thin
Thin is in — if you can get past the sticker shock...

CRT technology is better and cheaper than ever. So, is there a case for these very expensive LCD display panels? Maybe.

Most flat-panel displays use LCD technology. LCD limitations have encouraged research into other technologies; so far, these have not made it to the mass-market for desktop computer displays. LCD makers such as Samsung (with about 60 percent of the market) and Mitsubishi supply enhanced-capability LCD panels, improving detail by increasing image-element density, or offering wider-angle viewing.

Displays using these new panels are remarkably sharp, and the color fidelity is stunning. That fidelity is more than sufficient for many kinds of design and for working with photo-editing. Power consumption is low; size is easily constrained.

{short description of image}
Altizen's 17" LCD Monitor (see below)

So, why aren’t these being sold in large numbers, with attendant drops in cost?

According to the OEMs, high cost is tied directly to relatively low manufacturing yields. Especially in the preferred, “active matrix” designs, the number of panels discarded on QA grounds is large enough — the number given out is around 20 percent — that the unit cost is seriously affected. The larger the panel, the greater the problem.

An alternative explanation: Monitor-makers are delighted to keep prices — and margins — high, avoiding the low-margin problems of other electronics makers.

So, LCD flat-panel monitors are not cheap. A 15” monitor costs between $850 and $1,000 (or more…). A 17” monitor will cost over $2,000; 20” models can be had at prices rivaling New York rents in fashionable parts of town. Should you consider spending this kind of premium for a display?

Maybe. Consider the cost of real estate: A flat-panel display – inherently more ergonomically acceptable – fits nicely in a standard 36-40 square foot cubicle. A large-screen flat-panel is not particularly more demanding of desktop real estate than a smaller model. This is not true of large-screen CRT-based monitors, where even “slim” models require depth equivalent to the screen size – the bigger the monitor, the deeper the desk, among other things. On brokerage and exchange trading floors, this is common wisdom; flat-panels are the display of choice in these crowded spaces. Save six square feet, at $25 or more a foot per month, and a flat-panel monitor easily pays for itself.
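
The payback arithmetic is easy to check. In this minimal sketch, the floor space and rent restate the round numbers above; the price premium is our own assumption for illustration:

    # Floor-space payback for a flat panel versus a deep CRT.

    saved_sq_ft = 6          # desk space reclaimed by the flat panel
    rent_per_sq_ft = 25.0    # dollars per square foot per month
    lcd_premium = 1500.0     # assumed price premium over a comparable CRT

    monthly_saving = saved_sq_ft * rent_per_sq_ft
    print(f"${monthly_saving:.0f}/month saved; premium recovered in "
          f"{lcd_premium / monthly_saving:.0f} months")

On those numbers, even a $1,500 premium pays for itself inside a year.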

Add to that: Apparent resolution is better on flat-panel displays. A 15-inch flat-panel display appears to yield the same ease of viewing as a 17-inch CRT-based monitor. [The downside: 15” flat panels are generally limited to 1024x768 pixel resolution — one needs more screen space than that, increasingly.] An 18-inch flat-panel is almost as good as having a 20-inch CRT (if 1280x1024 pixel resolution will suffice). And so on. This can translate into happier, more productive designers and draughtspersons.

These devices look better, especially if the back is going to be seen. The best of the breed include a USB hub; couple that with a new-style system with new-style keyboard and mouse (or tablet), and a sealed, no-add-ins box, and the system is simply going to be easier to deal with — both aesthetically and technically.

Finally, the newest designs accommodate both analogue — legacy — input and digital input — the latest and greatest. Use the monitor with your system today and expect it to survive the next system-unit upgrade.

So, what can you buy, today?

15-inch LCD flat-panel models, with an effective display area of about 14 to 14.5 inches and support for 1024x768 pixel resolution, are becoming commonplace. Though these displays are smaller than the 17-inch to 19-inch displays commonplace for graphics users, type is crisp. Run the system with “small fonts” at maximum resolution and the results are still entirely readable. Part of the proof of this: Notebook computers with somewhat smaller screens are being deployed as desktop-replacements. Even bifocal-wearers like me can use these panels without much trouble. Most of the major brands have offerings in this size range. They tend to be older models, using less advanced technology, and handle analogue output only.

Design professionals really need larger screens, both for long stints working on details and for collaborative sessions.

Pride of place in this class goes to Silicon Graphics. The company’s digital-direct 1600SW display delivers a wide-screen 1600x1024 pixel resolution on a surface 17.3 inches on the diagonal. This digital-only monitor is supplied with display hardware from 3DLabs. [SGI offers an added-cost analogue/digital converter for folks who need to accommodate legacy graphics hardware.] In short, this is a complete display system, carrying a price tag of $2,895. It couples with both PC-family machines under Windows 95/98/2000 and Macintosh systems.

Viewsonic’s VG180 is typical of more conventionally designed analogue LCD monitors. With a diagonal measurement of 18 inches, this monitor delivers up to 1280x1024 pixel resolution, and works equally well with PCs and Macs. Other firms offering panels in this category include NEC/Mitsubishi (still honeymooning in their joint marketing operation), Samsung and Sceptre. IBM, not short on the innovation front, offers a 16.1-inch diagonal display in the “SXGA” format (another name for 1280x1024 pixel resolution…). All these companies are now introducing analogue/digital dual-capability models.

A new player on the scene, Altizen, merits special attention. This is a new brand coming from Korea, where the company claims close relations with panel-maker Samsung. The company’s lineup consists of an older-tech 15” monitor, and new-tech 17” and 18” models – the latter two analogue/digital dual-capability models. Sporting an 80-degree viewing angle, the large Altizen monitors deliver a very crisp 1280x1024 image. Altizen makes its middle model really very attractive — same pixel resolution and not a great deal smaller, for $575 less than the big one. All the monitors are well designed, with attention to details like cable management (what a concept!); the new-tech larger models include a four-port USB hub for desktop peripherals — keyboard, mouse or tablet, even a video camera.

The name of the game is space on the desk, as much as on the screen desktop. LCD monitors are not cheap, but are cost-effective in lots of places. They look good, and the picture on the screen is probably better, all things considered.

ARCHIVES

CAD & Clothing — Engineering your new duds

By comparison, architectural and engineering applications of CAD look simple. Textile and apparel design processes are not only complex, but the style cycle is short. Twice a year or more, clothing designers have to have something new, and it has to look new. Variations on the CAD theme make for more responsive design cycles.

Old main-line retail chains have been doing this for generations. Sears and J. C. Penney have long bought the entire output of some clothing manufacturers, made up to retailer specifications. Both companies have even offered (very quietly) made-to-measure features in their own-brand products. [For example, J. C. Penney would run up special orders of Towncraft shirts in odd-size sleeve-and-neck combinations; minimum order: four shirts. Sears Roebuck had a made-to-measure clothing department in selected stores.] Large retailers have long understood the advantage of a vertically integrated clothing business.

Modern apparel design is tremendously competitive – there are many designers out there. It’s not just independent design shops; every serious retail chain has its own design teams. Consider the Gap: Originally, the company sold Levi-Strauss jeans; it made itself over into a brand-name jeans maker competing with Levi-Strauss – then cloned that experience with another competing line, Old Navy.

Users of clothing & textile CAD
Who uses this technology? Well...

The complexity has increased as manufacturing has moved off-shore. A design originating in North America or Europe is more than likely to be manufactured under contract in a country where even shop management has only limited command of the language of the designer. Precisely detailed drawings, patterns and samples are essential; they ensure that the job is done right the first time.

The time pressures are not insignificant. The fashion-dominant crowd is teenage – Generation Y. Shifts in taste can happen monthly. Major shifts are clearly visible in the course of a year (a year ago, my students still wore Tommy Hilfiger and red/white/blue – that was before folks decided they didn’t like being bad-mouthed; a year ago DKNY was still a presence in my part of the City University -- now, just a memory).

A decade ago – less – things were fragmented. Major design houses such as Liz Claiborne knew they wanted CAD technology, but only seriously began playing with it in the mid-‘90s. Two years ago, a quick survey of apparel and textile industry trade shows revealed only a limited number of players, with a definite slant toward the needs of textile designers serving industries already heavily dominated by an engineering mindset. For example, textile designers creating fabrics for use in automobile interiors were more likely to use CAD; design offices in close proximity to automobile design centers in Detroit would be wired to manufacturing plants in the Southeast, literally controlling the weaving process at a distance.

The demands of a fast-paced fashion industry, coupled with a generation of young designers for whom the computer is not a strange, antithetical, un-artistic tool, have changed things. So, CAD has come to the clothing industry. It appears that the changes are just beginning.

CAD for textile and apparel design comes in several different forms. Both bitmap and vector models play in the field; solutions based on off-the-shelf programs work side-by-side with highly specialized programs.

A very good example of dedicated vector-drawing software for the apparel industry comes from Karat Software in Montréal. [Karat has a whole line of software tailored (!) to the needs of apparel designers; it covers the entire spectrum of the business from concept to fulfillment.]

KaratCAD Designer is real CAD; the company builds on top of the AutoCAD OEM engine. This is smart for a lot of reasons: AutoCAD is very well designed software with a long history of getting the issues solved correctly. AutoCAD – and therefore, KaratCAD – runs on relatively low-cost AMD- and Intel-based systems (I tested the Karat demo on an AMD Athlon system and it zipped right along). Despite the custom elements, KaratCAD feels like AutoCAD and runs like AutoCAD; consequently, lots of training and support options exist – not all of them KaratCAD-specific. The most important move in that direction: New York’s Fashion Institute of Technology, a preeminent school for various aspects of the apparel industry, now offers KaratCAD instruction as part of its routine.

The basics of the KaratCAD design process are easily comprehended and mastered. Starting with a mannequin, sketch the basic lines of one side of a garment – shoulder and sleeve, down through the torso and so on. Add neckline and perhaps lines indicating fitting (e. g., the “princess line” in a woman’s dress). Duplicate the resulting half-garment and join the lines; hide the mannequin. Details such as a knit neckband or cuff can be shown by inserting a repetitive, slightly varied pattern of lines – a standard CAD feature. Stitching and seam details, buttons and the like, are added with specialized apparel-design tools, accessed the same way.

Following the KaratCAD demo, and drawing on my own – somewhat out-of-date – familiarity with AutoCAD, I was able to do simple things in a very short period of time.

Much the same story applies to Micrografx Designer. Designer has always had a strong CAD heritage underlying its illustration orientation. The company has had a somewhat rough time over the last few years; recently, Micrografx has been playing its strong cards with tailored applications of its powerful graphics software to particular industry needs. The result is an off-the-shelf price for what amounts to industry-customized software.

As with KaratCAD, availability of training is central. In Micrografx’s home territory, this means former employee Mary Jo Pilcher. She lives around Dallas, and J. C. Penney can easily have her come in to train new designers at the company’s headquarters.

Pilcher’s background at Micrografx means not only thorough familiarity with the software but a wide range of training experiences. Again, it turns out the general-purpose nature of the software is a strength. Pilcher indicates that the addition of a relatively small number of specialized lines to represent seams and other apparel-specific elements makes Micrografx Designer an effective tool.

These vector-drawing programs show their advantage when it comes to producing finished drawings, obviously. One can easily see how a layered drawing could produce detailed drawing sets leaving no ambiguity about what was required in manufacturing. Equally, vector-based CAD software is ideally suited to driving large-format printer/plotters, including those which can be used to actually cut, and other early-phase prototyping and manufacturing processes.

Bitmap programs seem to play most strongly in the textile-design end of the fashion industry. Haberdash, now from Machanix, is the “venerable player” and effectively works as an enhancement to Adobe PhotoShop. Colour-Matters, originating in the United Kingdom and marketed from New York, is a complete stand-alone product specifically aimed at the apparel end of textile design.

Haberdash’s claim to special strength is its relative familiarity to design-school graduates: The company line is, “Why train designers in software use, when they already know PhotoShop?” It is not a bad approach. PhotoShop was up and running with the Haberdash tools installed in jig time; I was able to create a fairly straightforward design for a printed material in minutes.

Haberdash’s add-on approach minimizes the cost of the product; list is US$699 complete (which does not, apparently, include the cost of PhotoShop).

Colour-Matters makes a case that familiarity is not as valuable as specifically tasked software. The company’s product consists of a platform product, to which modules are added for specific kinds of textile design. The add-ons customize the software for woven materials, knits and mapping the resulting material to a photograph.

Answering Haberdash’s claim to a short learning curve, Colour-Matters advances customer claims to ease-of-use and quick productivity. Colour-Matters suffers against the PhotoShop plugin, however, when it comes to price; the platform software costs US$4,000, while the modules run from US$3,000 each.

Hardware requirements for each of these programs appear to be a bit more involved than might be expected for vector-drawing programs. Colour-Matters recommends a hefty Pentium III class machine with highest-end graphics display hardware and at least 256MB RAM.

A Colour-Matters peculiarity: The company’s recommended-hardware specification calls for disabling serial ports in the BIOS, and never using them (their emphasis). I did not have time to pursue this rather strange non-sequitur of a specification; one suspects either an untraced bug or the artifact of a hardware control no longer used. In such a case, turning on serial ports, with their very specific IRQs and memory addresses, could interfere with the software. One immediately suspects pre-Win32 (Windows 95/98/ME and NT/2000) programming code and software design strategies; the Windows memory model should lock out this kind of programming issue. This could lead to serious stability problems, especially in current Windows systems, which are decreasingly tolerant of sloppy DOS-leftover and 16-bit Windows code.

One last program drew attention: A Korean product, Solias Studio, is interesting because of its history. This is bitmap software; it is well suited for textile design – not surprising, given the strength of Korea’s textile manufacturing industry. At the same time, it is well-tuned to sketching and design, and transition from such sketches to working drawings and client-presentation boards.

The history is simply stated: The company had software, but it wasn’t going anywhere. In comes a Rhode Island School of Design graduate with a talent for programming. He takes the programming team off for six months and gives them a short course in design. The programmers come back to their workstations primed with a genuine understanding of what designers do, and build that into the new product. The result is impressive. Best of all, that designer is also the guy you are most likely to talk to at this company’s North American offices. He and his ilk do the training. This means a short learning curve and the best kind of support in the product-use department.

The price is not bad, either: The entire package is US$1,000; according to the company’s website, additional licenses (after the first) cost US$150 each. This is both effective and a nice deterrent to the make-lots-of-copies piracy problem.
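
In round numbers (a minimal sketch using just the prices quoted above), the per-seat cost drops quickly:

    # Solias Studio seat pricing, from the figures quoted above:
    # US$1,000 for the first license, US$150 for each additional seat.

    def solias_cost(seats):
        return 1000 + 150 * max(seats - 1, 0)

    for n in (1, 5, 10):
        print(f"{n:2d} seat(s): US${solias_cost(n):,} "
              f"(US${solias_cost(n) / n:,.0f} per seat)")

Ten seats work out to US$2,350 — US$235 a seat — which is exactly the deterrent the pricing intends.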

The Solias downside: The company is so nervous about intellectual-property issues, it has locked the software up with a password-protection scheme that could easily turn a user off. I tend to distrust such schemes; they often break the software somewhere. I also dislike the inconvenience, in this case: Get the software, then call in for an unlock password, which is custom-generated based on the serial number (also custom-generated) and the time of the request. I waited too long to plug in my data and couldn’t unlock my software. The surf-over-for-a-new-one feature, which should have generated and e-mailed me a new password, didn’t work. In short, when you have this installed, don’t fail in your back-ups and don’t try to change to a new machine.

The second problem: Solias Studio uses proprietary datafile formats. Though the company provides vector-like drawing capabilities, it is a bitmap program and rather determinedly sticks to that. It imports and exports all the usual bitmap datafiles, but cannot exchange data with vector-based software. That, however, is largely inherent in the whole vector/bitmap difference, and not a great issue, finally.

Athlon — as in “Decathlon”
AMD ain’t Number-2 Anymore (Maybe)

Advanced Micro Devices’ (AMD) Athlon processor is fast. AMD has coupled this next-generation processor with support for an advanced architecture aimed at reducing the system-to-CPU bottleneck. The result is a system that jumps ahead of Intel-inside boxes at a price that makes owning the latest-and-greatest attractive.

Most people know AMD as the purveyor of processors for lower-cost systems. The company has been in the x86 business for a long time, having brought to market superior implementations of the 486 and Pentium architectures. The company’s K6, K6-2 and K6-III processors have tracked Intel’s Pentium-branded offerings pretty closely.

The Athlon is a deliberate move to jump ahead of Intel. The company’s description of the processor and support architectures makes clear the extent to which it has adopted the design elements hitherto associated with the processors used in high-end proprietary workstations. To get the best possible performance, AMD licensed the Digital Equipment Corp. (DEC) Alpha EV6 system-bus technology. This bus was designed to couple DEC’s screaming Alpha processors to a system; AMD’s initial implementation, at twice the speed of the bus used by Intel’s Pentium III processors, means that the fast Athlon processor is not constrained by a narrow-bandwidth connection to the rest of the system.

It appears the move has succeeded to some degree; the Intel-Inside line seems less pervasive than it was. The engineering sample AMD supplied to CitiGraphics has proven extremely stable, and extremely able.

Aside from the Athlon-specific elements, all the parts were standard off-the-shelf products – the kind of parts that are commonly found in both brand-name and quality white-box microcomputers. One important element: The system sports three good-size muffin fans; there is a lot of heat to dissipate. The fans did the job well; the system in its standard minitower case never seemed to run hot even after six or eight hours tucked in to the knee-hole under the test-bench.

The 650MHz system we've been using is a screamer. We expect new systems based on a more tightly integrated processor.

For design professionals, the key elements are straight math processing – integer math and floating point calculations. Strong performance in these areas translates directly into faster transitions in walk-throughs, faster rendering of design elements, faster transition to photo-realism, faster results in simulations and other modeling chores, and so on. The standard benchmark software for assessing performance in these two crucial areas comes from the Standard Performance Evaluation Corp.: the SPECfp and SPECint programs.

Normally, one expects to see a difference in performance only if these numbers pass a noticeable-difference threshold of about 7 percent. The 600MHz Athlon showed about 18 percent improvement over a 550MHz Pentium III on SPECint and a rather dramatic 51 percent improvement on SPECfp, using otherwise comparable systems. The slightly greater processor speed does not appear enough to account for these differences; the significant elements are the improved processor and system-integration architectures.
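
A quick check of that clock-speed argument (the percentages are those reported above; the arithmetic is ours):

    # Does clock speed alone explain the reported SPEC gains?
    # Figures are those cited in the text.

    clock_gain = (600 - 550) / 550 * 100   # Athlon's clock advantage: about 9%
    specint_gain = 18                      # reported SPECint improvement, percent
    specfp_gain = 51                       # reported SPECfp improvement, percent

    print(f"Clock advantage alone: {clock_gain:.0f}%")
    print(f"SPECint gain beyond clock: about {specint_gain - clock_gain:.0f} points")
    print(f"SPECfp gain beyond clock: about {specfp_gain - clock_gain:.0f} points")

Charging the whole nine-percent clock advantage against the results still leaves roughly nine points on SPECint and more than forty on SPECfp to credit to the processor and bus architecture.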

Looking at math-intensive applications demonstrated the case even more effectively.

First, I played with a sample AutoCAD drawing supplied by AMD. Running in the current version of AutoCAD under WinNT, the drawing consisted of three toruses and a cube with the AMD logo. The fairly complex geometries in this image rendered quickly in real time. More important, a simple animation, swinging the donuts around the cube in various patterns, moved smoothly through its evolutions without a hitch.

Athlon Performance

The chart shows math performance in Windows for both 550MHz and 600MHz Athlon processors, compared with a 550MHz Pentium III. Note also noticeably better OpenGL rendering in an AutoCAD environment.


For a more “real world” test, I used ViaGrafix’s new ViaDraw software. This is a very simple illustration program which sports a quick-3D capability I like, including the ability to execute radial sweeps. I drew a nasty complex profile, with lots of points, and swept the image into a 3D solid.

{short description of image}

Drawing and rendering times were about 10 to 15 percent faster than on my production Pentium II Intergraph system. But when I started playing with the image – rotating it, moving it around, moving it from one environment to another, things that stretched the capacity of last year’s hot system – waiting times dropped. Effectively, this system eliminated waiting time altogether. Things that might take half a minute on my production system took less than ten seconds.

Along the way, I noticed that I spent a lot less time waiting for software to load. Complex drawing software involves loading not just one program, but a series of support libraries, all of which are integrated more or less on the fly to create the new environment. A number of elements play in this: system memory, hard disk access speeds and processor speeds all work together. I take the faster program initialization as a further indicator of just how carefully AMD has thought through the integration issues, getting beyond the merely-hot-processor mindset that tends to be the centerpiece of new systems announcements.

My last formal test used 3Scan from Geometrix. 3Scan takes photographic images and generates 3D models from them. Using a canned image of TinkyWinky (sans purse…), I created a model which I then rotated and spun around and so on, all in real time.

{short description of image}

TinkyWinky’s photo, scanned and rendered as a 3D model, rotating and spinning in real time.

Generally, 3Scan processes ran up to 15 percent faster in the Athlon environment.

In addition, I lived with the system. I used it for ordinary day-to-day computing. I found the system rock-solid for all the usual chores one does on the machine. Internet access, performance in a networked environment, running Office 2000 – all that kind of thing – ran without a hitch, generally faster. This machine delivered the snap I associate with a technical workstation – say, an Alpha-based system or an SGI workstation.

The conclusions are interesting: First, I have revised my acceptance of the Intel-inside line. AMD has long since proved capable of delivering a quality x86 processor at better-than-Intel prices; the company has even compelled Intel to compete in that arena. Even with its success, Intel-based systems usually sell at a premium over AMD-based systems; it has not been clear that the premium price represented any greater value.

Second, AMD has demonstrated its ability to deliver a forward-thinking design, retaining compatibility, but delivering noticeably better performance in an environment that involves no compromises.

The bottom-line appears to be that AMD Athlon systems represent an opportunity to get next year’s hot box, offering workstation performance, now, at an attractive price.
