Manufacturer: Open Source Contributors
Price: Free Download
Author: Petr Sorfa
The one development tool that has been lacking in the Open Source community is a professional-level IDE (integrated development environment). KDevelop fills this gap by combining the efforts of its contributors with existing open-source products. But does KDevelop meet the expectations set by commercial IDEs, which are usually based on non-UNIX platforms?
An IDE is an environment, preferably graphical, that is used for the creation, debugging and maintenance of programs. The three core components of this environment are a programmer's editor that is context-sensitive to the programming language, a GUI (graphical user interface) builder that is used to construct the graphical front end of the application and a debugger to detect bugs in the code.
These are the basic requirements of an IDE. However, there really needs to be more than these three components to make an IDE a useful tool.
Because open-source programs tend to concentrate on completing the task rather than on being user-friendly, installation can be difficult and frustrating, particularly given the many different Linux distributions and the constantly changing libraries and tools.
The KDevelop RPM binary can be downloaded by either following the links off KDevelop's web site or by using a site such as http://www.rpmfind.net/ to locate it.
For this review, I performed a fresh Linux installation, selecting every package and feature the distribution offered.
Alas, I still ran into installation problems: several library dependencies were not present on my system. A quick diversion to the Internet to download the missing libraries solved the problem.
The total installation took about 30 minutes, given a fast Internet connection and a little technical knowledge. This installation method is ideal for users with some Linux administration skills.
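The binary installation boils down to a few rpm commands. As a sketch (the package filenames below are illustrative; the actual names vary by distribution and version):

```shell
# Inspect the RPM's dependencies before installing
rpm -qpR kdevelop-1.1-1.i386.rpm

# Install; rpm aborts with a "failed dependencies" message if libraries are missing
rpm -ivh kdevelop-1.1-1.i386.rpm

# Fetch any missing libraries (e.g., from rpmfind.net) and install them first,
# then retry the KDevelop install
rpm -ivh qt-2.2.1-1.i386.rpm kdelibs-2.0-1.i386.rpm
```

Querying the dependencies first (`rpm -qpR`) saves a round trip when several libraries are missing at once.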
Building from source is recommended for programmers who run non-Linux/UNIX operating systems, for customized Linux distributions and for potential KDevelop contributors. Only experienced or very determined developers should attempt it.
All the development versions of the required libraries must be installed, and because there is no easy way to determine these dependencies, building from source tends to be a process of trial and error.
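The source build itself follows the usual autoconf routine (the version number and prefix below are illustrative; consult the INSTALL file shipped in the tarball):

```shell
tar xzf kdevelop-1.1.tar.gz
cd kdevelop-1.1
./configure         # this is where missing development libraries surface
make                # compile
make install        # install, usually as root
```

Each failed `./configure` run typically names one missing development package; installing it and rerunning is the trial-and-error loop described above.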
A feature of KDevelop is its ability to use many existing open-source tools. Not all of these tools are strictly required, but having them ensures that KDevelop performs as expected. When KDevelop is started for the very first time, a list of associated tools is displayed, each marked as either present or missing (see Figure 1). Missing tools can be installed later.
Required tools utilized by KDevelop are g++ 2.7.2, g++ 2.8.1 or egcs 1.1 (I recommend g++ 2.9.2); make; Perl 5.004; autoconf 2.12; automake 1.2; flex 2.5.4; gettext; Qt 2.2.x (which includes Qt Designer and uic); and KDE 2.x.
Optional tools include enscript, Ghostview or KGhostview, Glimpse 4.0, htdig, sgmltools 1.0, the KDE-SDK (KDE software development kit), KTranslator, KDbg, KIconedit and Qt Linguist. Although optional, it is best to have all of these tools available.
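A rough equivalent of KDevelop's first-run check can be scripted by hand to see which of these tools a system already has (the tool names are taken from the lists above; extend as needed):

```shell
#!/bin/sh
# Report each associated tool as present or missing,
# mimicking the check KDevelop performs on first start (Figure 1).
for tool in make perl autoconf automake flex gettext enscript htdig; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: present"
    else
        echo "$tool: missing"
    fi
done
```

Running this before installing KDevelop shortens the later round of filling in missing packages.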
Although KDevelop provides the three core requirements of an IDE (editor, GUI builder and debugger—see Figure 2), it has several other features that make it a robust and reliable tool, suitable even for commercial projects.
A complex program can be daunting for beginners and experts alike, so program documentation is critical. KDevelop's documentation provides a good source of on-line help, although it lacks screenshots and visual content. Context-sensitive help is available through tooltips and the “What's this?” cursor mode.
KDevelop also indexes the KDE Lib and Qt documentation, and bookmarks can be set, making it easy to return to relevant pages. Other tutorials and documentation are also available at KDevelop's web site.
KDevelop has a built-in HTML browser that makes documentation access effortless and removes the need for an external browser.
The basic interface components are:
- Tree View: class, groups, file, books and watch views.
- Output View: output for messages, stdout, stderr, debugger breakpoints, debugger frame stack, debugger disassembly and debugger messages.
- Editor and Documentation: header/resources editor, C/C++ file editor and documentation browser.
- Tool Bar: an iconic representation of the main menu options.
KDevelop's project creation process is one of the easiest to execute using the Application Wizard, which goes through the following steps:
Application Type (see Figure 3)--this step allows the user to select a template for the new program: KDE 2 Mini, KDE 2 Normal or KDE 2 MDI; GNOME (Normal); Qt (Normal, Qt 2.2 SDI, Qt 2.2 MDI, QextMDI); Terminal, i.e., text (C, C++); and Others (custom).
Generate Settings (see Figure 4)--this is the step to enter the project name, location, initial version number, author's name and e-mail. There are also options to generate various project-associated files, such as sources, headers, GNU standard files, icons and project-associated documentation.
Version Control System (see Figure 5)--this dialog sets the parameters of the source control system. The available system depends on the Linux distribution; in general, it is CVS.
Header Templates (see Figure 6)--this allows the developer to select automatically generated headers for program header and source files. These headers are fully customizable through tag expansions, which fill in information such as the author, filename and date.
Project Creation (see Figure 7)--in the final stage, the project's files and directories are created using the automake and configure tools. Note that if some of the required tools are missing from the Linux distribution, this process might fail. If it does, it is best to install the missing components and recreate the project, because recovering from a project-creation failure is extremely difficult.
Once the project has been created, development can begin. I strongly suggest building and running the project at this point to catch any build problems early.