MyHDL: a Python-Based Hardware Description Language
The SPI slave module was modeled at a level that stays close to an actual implementation. This is a good way to introduce MyHDL's concepts. However, using MyHDL at this level doesn't offer many advantages over traditional HDLs. MyHDL's real value is that it makes the whole of Python available to hardware designers. Python's expressive power, flexibility and extensive library offer possibilities beyond the scope of traditional HDLs.
One area in which Python-like features are desirable is verification. In hardware design, as in software, verification is the hard part. It is generally acknowledged that traditional HDLs are not up to the task. Consequently, yet another language type has emerged: the hardware verification language (HVL). Once again, MyHDL relies on Python to challenge this trend.
To set up a hardware verification environment, we first create a test bench. This is a hardware module that instantiates the design under test (DUT) together with data generators and checkers. Listing 2 shows a test bench for the SPI slave module. It instantiates the SPI slave module together with an SPI tester module that controls all interface pins. So that multiple SPI tester modules can verify various aspects of the design, the tester module is passed to the test bench as a parameter.
Listing 2. A Test Bench for the SPI Slave Module
import unittest
from random import randrange

from myhdl import Signal, intbv, traceSignals

from SPISlave import SPISlave, ACTIVE_n, INACTIVE_n


def TestBench(SPITester, n):

    miso = Signal(bool(0))
    mosi = Signal(bool(0))
    sclk = Signal(bool(0))
    ss_n = Signal(INACTIVE_n)
    txrdy = Signal(bool(0))
    rxrdy = Signal(bool(0))
    rst_n = Signal(INACTIVE_n)
    txdata = Signal(intbv(0)[n:])
    rxdata = Signal(intbv(0)[n:])

    SPISlave_inst = traceSignals(SPISlave,
                                 miso, mosi, sclk, ss_n,
                                 txdata, txrdy, rxdata, rxrdy,
                                 rst_n, n=n)

    SPITester_inst = SPITester(miso, mosi, sclk, ss_n,
                               txdata, txrdy, rxdata, rxrdy,
                               rst_n, n=n)

    return SPISlave_inst, SPITester_inst
For the tests themselves, we use a unit testing framework. Unit testing is a cornerstone of extreme programming (XP), a modern software development methodology that is an intriguing mixture of common sense and radically new ideas. The genuine XP approach is to develop the test first, before the implementation. XP is a useful methodology, but its lessons are virtually ignored by the hardware design community. With MyHDL, Python's unit testing framework, unittest, can be used for test-driven hardware development.
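For readers new to unittest, the following is a minimal, hardware-independent sketch of the workflow (the trivial adder check and the TestAdder name are purely illustrative): any method of a unittest.TestCase subclass whose name starts with test is discovered and run automatically.

import unittest

class TestAdder(unittest.TestCase):

    def testZero(self):
        """Adding zero leaves a value unchanged."""
        self.assertEqual(5 + 0, 5)

if __name__ == '__main__':
    unittest.main()

The same pattern carries over directly to hardware: the test methods simply run MyHDL simulations instead of calling ordinary functions.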
Listing 3 shows test code for the SPI slave module. Tests are defined in a subclass of the unittest.TestCase class. Each method whose name starts with the prefix test corresponds to an actual test; other methods can be written to support the tests. A typical test suite consists of multiple tests and test cases, but we describe a single test here to demonstrate the idea.
Listing 3. A Test Case for Receiving Data via SPI
import unittest
from random import randrange

from myhdl import Simulation, join, delay, intbv, downrange

from SPISlave import SPISlave, ACTIVE_n, INACTIVE_n
from SPISlaveTestBench import TestBench

n = 8
NR_TESTS = 100


class TestSPISlave(unittest.TestCase):

    def RXTester(self, miso, mosi, sclk, ss_n,
                 txdata, txrdy, rxdata, rxrdy,
                 rst_n, n):

        def stimulus(data):
            yield delay(50)
            ss_n.next = ACTIVE_n
            yield delay(10)
            for i in downrange(n):
                sclk.next = 1
                mosi.next = data[i]
                yield delay(10)
                sclk.next = 0
                yield delay(10)
            ss_n.next = INACTIVE_n

        def check(data):
            yield rxrdy
            self.assertEqual(rxdata, data)

        for i in range(NR_TESTS):
            data = intbv(randrange(2**n))
            yield join(stimulus(data), check(data))

    def testRX(self):
        """ Test RX path of SPI Slave """
        sim = Simulation(TestBench(self.RXTester, n))
        sim.run(quiet=1)


if __name__ == '__main__':
    unittest.main()
The RXTester method is a generator function designed for a basic test of the SPI slave receive path. It contains a local generator function, stimulus, that transmits a data word on the SPI bus as a master. Another local generator function, check, checks whether the data word is received correctly by the slave. The complete test consists of a number of random data word transfers. For each data word, we create a stimulus and a check generator. To wait for their completion, MyHDL allows us to put them in a yield statement. For proper synchronization, we want to continue only when both generators have completed. This functionality is accomplished by the join function.
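To illustrate the synchronization provided by join in isolation, here is a small, self-contained sketch, separate from the SPI design and using hypothetical generator names, in which a caller resumes only after both of two generators with different delays have finished:

from myhdl import Simulation, delay, join

def short_task():
    # finishes at simulation time 10
    yield delay(10)

def long_task():
    # finishes at simulation time 50
    yield delay(50)

def control():
    # join() suspends the caller until *both* generators are done,
    # so execution continues here at time 50, not at time 10
    yield join(short_task(), long_task())
    print("both tasks completed")

Simulation(control()).run(quiet=1)

In the test above, the same mechanism guarantees that the next random data word is not transmitted before the check of the previous one has completed.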
When we run the test program, the output indicates which tests fail at what point. When everything works, the output from our small example is as follows:
$ python test_SPISlave.py -v
Test RX path of SPI Slave ... ok

------------------------------------------------
Ran 1 test in 0.559s