Using Linux at Lectra-Systèmes
Lectra Systèmes is one of the two world leaders in the design and manufacture of CAD/CAM solutions and cutting machines, mainly for the footwear and apparel industries. The company's headquarters are in Cestas, in the suburbs of Bordeaux, France. Five hundred people work there, 150 of whom are in the Research and Development department.
I am in charge of systems development in the R&D department. The systems group handles all development concerning base systems (e.g., installation procedures, graphic libraries, tools).
Since the 1980s, Lectra has developed its own computers based on Motorola 680x0 processors. Most of the installed systems (approximately 3,000 customers, 80% abroad) run a single-task, proprietary operating system written for the 680x0, called MILOS, for "Micro Lectra Operating System".
A few years ago, Lectra became interested in database systems, which required a more powerful system that would be multi-tasking and multi-user. After some teething problems with Unix-like systems, the choice fell on Unix System V3.2 for the 680x0 architecture. The small team of which I am a member ported the UniSoft sources as well as the X Window System graphic environment.
Lectra then decided to develop a new line of computers based on the 68040 processor, much more powerful than the 68030. The operating system used was USL's Unix SVR4.0, and another port was made.
Although this task proved to be very interesting, we were convinced that this computer (named OpenCad) would be the last one designed from scratch by the R&D teams: with production runs too small to be competitive, it is difficult to remain in a hardware market that is a race for power and low prices.
Despite OpenCad's commercial success with our customers, Lectra's management quite rightly decided to launch the development of a completely new range of products based mainly on Intel 486 and Pentium architectures, still with a Unix environment and the X Window System. The database applications, which use many resources, would on the other hand be targeted at the SUN SPARC architecture.
After some comparative tests between the different versions of Unix on the PC, we decided to use Linux, which proved to be robust, high-performance and the right price. Having the system sources available was also a major advantage, as we use many special peripherals whose drivers would be much more difficult to adapt on a Unix system delivered without sources.
Having chosen the system, we then needed to adapt Linux into an industrial solution. It is quite clear that Unix (and, therefore, Linux) is somewhat difficult for an end user to handle. This adaptation had to be done in two stages:
at the installation procedure of the final product, since we cannot expect a technician (or a customer) to know how to install Slackware
at the user interface, so that basic workstation administration (network, users, access rights) and Lectra-specific functions are easily accessible to someone who is not necessarily a computer scientist
The Lectra distribution follows the same principles as other distributions: two boot floppies and a CD-ROM. The installation screens use dialog-0.3, which proved extremely simple and powerful for creating a series of installation screens. The main Lectra Linux installation window can be seen in Figure 1.
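By way of illustration, here is a minimal sketch of how such a screen can be driven from a script. The menu entries, titles and geometry are invented for the example and are not Lectra's actual screens; Python is used here only for readability (a real installer would more likely be a shell script). The mechanism is standard dialog behavior: the selected tag is written to stderr, and a non-zero exit status means the user cancelled.

```python
import subprocess

def installer_menu():
    """Display a top-level installation menu with dialog(1).

    dialog draws the menu on the terminal and writes the selected
    tag to stderr; a non-zero exit status means Cancel or Esc.
    """
    result = subprocess.run(
        ["dialog", "--title", "Lectra Linux Installation",
         "--menu", "Choose an action:", "15", "60", "4",
         # hypothetical entries, for illustration only
         "disk",    "Partition and format the hard disk",
         "install", "Install the Lectra Desktop packages",
         "network", "Configure the network",
         "quit",    "Exit the installation"],
        stderr=subprocess.PIPE)
    if result.returncode != 0:
        return None
    return result.stderr.decode()  # the selected tag, e.g. "install"

if __name__ == "__main__":
    print("selected:", installer_menu())
```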
The main advantage of Linux in this domain is the possibility of creating an extremely precise installation procedure (i.e., only what is required is installed), which is therefore very quick. The current Lectra Desktop version takes less than 10 minutes to install on a Pentium 120. In comparison, the same desktop version on a Solaris system takes nearly an hour, as it is necessary to install the Solaris CD first, followed by the Solaris patches and then the Lectra Desktop.
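A precise installation of this kind essentially amounts to unpacking a fixed list of packages onto the target root, and nothing more. A minimal sketch of the idea, assuming hypothetical Slackware-style .tgz package names and mount points:

```python
import tarfile
from pathlib import Path

# Hypothetical package list: only what the desktop actually needs,
# which is what keeps the installation under ten minutes.
PACKAGES = ["base.tgz", "xwindow.tgz", "lectra-desktop.tgz"]

def install(source="/cdrom/packages", target="/mnt"):
    """Unpack each gzipped tar package onto the target root
    file system, Slackware-style."""
    for name in PACKAGES:
        with tarfile.open(Path(source) / name, "r:gz") as pkg:
            pkg.extractall(target)
        print("installed", name)

if __name__ == "__main__":
    install()
```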
The packages are assembled into an ISO-9660 file system (with Rock Ridge extensions) from a Linux directory tree using the mkisofs program. The ISO images are then written to the master CD using a PC under Microsoft Windows.
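For the mastering step, a minimal sketch of driving mkisofs from a script; the directory and image names are invented, while -R (enable Rock Ridge extensions) and -o (name the output image) are standard mkisofs options:

```python
import subprocess

def build_iso(tree="/home/lectra/cdtree", image="lectra-desktop.iso"):
    """Build an ISO-9660 image from a Linux directory tree.

    -R keeps long file names, permissions and symbolic links
    (Rock Ridge); -o names the output image, which is then
    burned to the master CD on another machine.
    """
    subprocess.run(["mkisofs", "-R", "-o", image, tree], check=True)

if __name__ == "__main__":
    build_iso()
```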