Xilinx FPGA Design Tools for Linux
Now that we've selected HDL, the next step is to tell the tool which FPGA device we're targeting. For illustration, we use one of the most advanced FPGA families, the Virtex-II Pro. This is shown in Figure 4, where the xc2vp7 device is selected, along with an fg456 package. This device is one of the smaller Virtex-II Pro parts, but it still has numerous resources; here, we use only a small part of the device.
The menu window in Figure 4 also allows other choices. You can select the part's speed grade, and here we accept the default of -6. The Project Navigator can be used to organize your entire project flow. For instance, you can perform both behavioral simulation and post-implementation (timing) simulation. The first type of simulation checks that you have a logically correct design, that is, that the design does what it's supposed to do. The second type runs after FPGA implementation and is used to verify the completed chip, including its timing. The device selection screen in Figure 4 also includes other options, such as which simulator to use for HDL simulation and which language, Verilog or VHDL, to use for simulating the implemented FPGA. The simulator might be one of the industry-standard tools provided by Xilinx partners, or another simulator of your choice.
Next, we create a new source file for the design and give it a filename. In this process, we tell the tool what kind of design document we're creating; in this case, it's a Verilog module. We enter a filename of mpy16.v, with .v being the standard filename suffix for Verilog. It is customary, though not required, to name the top-level file after the module it contains, or to use a name such as toplevel.v.
Several other kinds of documents can be entered into the tool and added to the project. We don't have time to examine all of these capabilities, which include alternative entry modes, such as schematics, and the inclusion of standard HDL libraries as well as custom libraries created by the user.
To define this (first) Verilog source for the design, the Design Manager offers some help. For the top-level module mpy16, we fill out a module port table using a tabular entry tool (Figure 5). Here, we define the wires that enter and exit the top-level module, and these will end up as the external I/O pins on the FPGA. The names of the ports entered are p, x, y and clk.
We specify p as a 32-bit-wide output and the main inputs, x and y, as 16 bits wide. Because this multiplier will be pipelined, we also include a port named clk, which provides the synchronous timing source for the multiplier. The port clk is only a single wire, or net, so we leave the MSB and LSB fields in the table empty. This makes clk a scalar. In Verilog, vectors are groups of wires or nets, and by convention they're usually declared with an index range running down to zero, as in [15:0].
After completing tabular entry for the top-level module, we obtain a summary dialog. Then with the project set up, the Project Navigator brings up all the tools and the initial outline for our Verilog module. This is shown in Figure 6, with the skeleton source code in the upper-right corner. An editor is supplied with ISE 6.1i, and you also can import HDL source code created with the Linux editor of your choice.
Listing 1. The Verilog Source Code for a 16-Bit Pipelined Multiplier
module mpy16 (p, x, y, clk);
   output [31:0] p;
   input  [15:0] x;
   input  [15:0] y;
   input         clk;

   // inferable storage via synthesis
   reg [31:0] p;
   reg [15:0] xq;
   reg [15:0] yq;

   // 16x16 unsigned multiplier specified
   // behaviorally
   always @(posedge clk)
     begin
        xq <= x;
        yq <= y;
        p  <= xq * yq;
     end
endmodule // mpy16
Listing 1 is the Verilog source code for a 16-bit pipelined multiplier. This code is written in a behavioral style, and we're going to allow Xilinx Synthesis Technology (XST) to figure out how to implement what the code describes. Synthesis today is powerful enough that we simply can infer the multiplier hardware, without having to specify its logic design in detail.
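To exercise the design in behavioral simulation, a small testbench can drive the module. The one below is a sketch, not part of the original design; the instance names and stimulus values are illustrative only. Note that because both the inputs (xq, yq) and the product (p) are registered, p becomes valid two clock edges after x and y are applied.

```verilog
// Hypothetical testbench for mpy16 -- names and values are
// illustrative, not from the original article.
module mpy16_tb;
   reg  [15:0] x, y;
   reg         clk;
   wire [31:0] p;

   // instantiate the unit under test
   mpy16 uut (.p(p), .x(x), .y(y), .clk(clk));

   // free-running clock with a 10-time-unit period
   always #5 clk = ~clk;

   initial begin
      clk = 0;
      x = 16'd1234;
      y = 16'd5678;
      // Two register stages (input registers, then the product
      // register) mean p is valid two clock edges after the
      // inputs are applied; wait three edges to be safe.
      repeat (3) @(posedge clk);
      #1 $display("p = %d", p);   // expect 1234 * 5678 = 7006652
      $finish;
   end
endmodule
```

Running this under any Verilog simulator supported by Project Navigator should print the product two-plus cycles after the inputs settle, confirming the pipeline latency before moving on to implementation.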
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or set of files based on all kinds of criteria. It's easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality lets UNIX system administrators always seem to have the right tool for the job.
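The find-plus-grep combination just described can be sketched as a short pipeline. The directory layout and the pattern "ERROR" below are placeholders; in practice you would point find at /home and search for your own string.

```shell
# Demonstration: build a tiny directory tree, then combine find and
# grep to locate every .log file that contains a given entry.
demo=$(mktemp -d)
mkdir -p "$demo/alice" "$demo/bob"
printf 'ERROR: disk full\n' > "$demo/alice/app.log"
printf 'all quiet today\n'  > "$demo/bob/app.log"

# -name selects the files, -exec hands them to grep in batches,
# and grep -l prints only the names of files that match.
find "$demo" -name '*.log' -exec grep -l 'ERROR' {} +

rm -rf "$demo"
```

The `-exec ... {} +` form batches filenames into as few grep invocations as possible, which matters when the tree holds thousands of logs.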
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Tech Tip: Really Simple HTTP Server with Python
- Non-Linux FOSS: Caffeine!
- Managing Linux Using Puppet
- Google's SwiftShader Released
- Doing for User Space What We Did for Kernel Space
- SuperTuxKart 0.9.2 Released
- Parsing an RSS News Feed with a Bash Script
With all the industry talk about the benefits of Linux on Power and the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some practical applications of the Linux on Power platform and ways you might bring all the performance of this open architecture to bear for your organization. There is no smoke and mirrors here, just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power may be used in the future. Get the Guide