At the Forge - Fixtures and Factories
One of the points of pride in the Ruby community is the degree to which developers are focused on testing. As I wrote last month, tests in a dynamic language have the potential to catch more errors and keep your code trim and functional than even the best compilers. Rails developers are used to working with three different types of tests: unit (for database models), functional (for controller classes) and integration (for testing things from a user's perspective). Combined with coverage and analysis tools, such as the metric_fu gem I described last month, these tests can help ensure that your code is as solid as possible before it is seen by the general public.
Testing your code requires that you provide it with inputs and that you then match those inputs with expected outputs. When it comes to a Web application, those inputs most likely will come from either a relational database or from a user's form submission. Testing form submissions is not particularly difficult, especially in a framework such as Rails, which has extensive testing support built in. Testing data that comes from a database, however, can be a bit more challenging, because it means that you must somehow store the data in the database so that the tests can access it.
One possible solution, of course, is to pre-populate the database tables with test data directly. But, as simple and obvious as that solution might appear at first glance, it assumes that you have a source from which you can pre-populate the database. You could do it by hand, but then you'll find that any modifications your program makes to the database—creating, updating and deleting rows—either will stay in effect for the next test or will need to be reloaded from scratch from another source.
In other words, you need a way to put the test database into a known state before you begin your tests. If you know this beginning state, you can write tests that check subsequent states.
The question is, how do you create that initial state? From the time that Rails was first released, the answer was fixtures—text files containing YAML-formatted hand-crafted data. Fixtures are nice, but as a number of Rails developers have written over the years, they can be hard to write, hard to keep track of and generally brittle.
This month, I take a look at the current state of loading data into a test database. I start by examining fixtures, exploring some ways you still might be able to make them useful inside your tests. Then, I cover a newer approach to test data, known as factories, looking at the Factory Girl gem and then taking a quick peek at the Machinist gem, both of which are in widespread use among Rails developers and might be a better fit than plain-old fixtures for your project.
Fixtures, as I mentioned above, are YAML files containing data that can be loaded into a database. Rails actually allows you to put your fixture data in formats other than YAML, such as CSV. However, my guess is that CSV is largely unused, and that YAML is the format used by almost everyone working with fixtures.
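For illustration, a fixture file for a people table might look like the following sketch; the filename follows the Rails convention of one fixture file per table, and the names and addresses here are invented for the example:

```yaml
# test/fixtures/people.yml -- hypothetical example data
alice:
  first_name: Alice
  last_name: Andrews
  email: alice@example.com

bob:
  first_name: Bob
  last_name: Burns
  email: bob@example.com
```

Each top-level key (alice, bob) names a single row, which makes it possible to refer to that row by name from within a test.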
I created a simple Rails application (using SQLite) on my computer with:
rails --database=sqlite3 appointments
Then, I generated a RESTful resource for people:
./script/generate scaffold person \
    first_name:string last_name:string email:string
This not only created a model for working with people, but also a controller for handling the basic RESTful functions, views for all of those controller actions, a database migration that uses Ruby to describe my model and even some rudimentary tests. I can apply the database migration with:

rake db:migrate
And, voilà! I now have a working application that allows me to add, delete, modify and list a bunch of people. You might have noticed that I named my Rails application appointments. My plan is to create a very simple appointment calendar, so that I can keep track of with whom I'll be meeting. So, I create another resource, named meetings:
./script/generate scaffold meeting \
    starting_at:timestamp ending_at:timestamp location:text
(It should go without saying that if I were creating this for real, I would not store the location as a text field, but rather as an ID pointing to another table of locations. Keeping data in such normalized form, so that the text appears in a single place and is referred to from elsewhere in the database using foreign keys, makes the application more robust, as well as more efficient.)
Finally, I create a third table, meeting_person, which allows one or more people to have a meeting. If I were willing to restrict appointments to a single participant (or two participants, if I include the person using this software), I simply could have a person_id field in the meeting table. To get this, I create a new model:
./script/generate model meeting_person \
    person_id:integer meeting_id:integer
Now that the three models are in place, I can add associations—those declarations in the model classes that link them to one another. While I'm editing the models, I also will add some validations, which ensure that the data fits my standards. The final version of the models is shown in Listing 1. Perhaps the only particularly interesting part of the models is the custom validation that I placed in the Meeting model:
def validate
  if starting_at > ending_at
    errors.add_to_base("Starting time is later than ending time!")
  end
end
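To see the logic of that check in isolation, here is a plain-Ruby sketch of the same ordering test, with no Rails involved. The MeetingCheck class is hypothetical, invented purely for this illustration; in the real model, errors.add_to_base records the message on the ActiveRecord object rather than in a plain array:

```ruby
require 'time'

# A stand-in for the Meeting model's custom validation logic.
# MeetingCheck and its interface are invented for illustration only.
class MeetingCheck
  attr_reader :errors

  def initialize(starting_at, ending_at)
    @starting_at = starting_at
    @ending_at   = ending_at
    @errors      = []
  end

  # Mirrors the "starting_at > ending_at" test in the model:
  # records an error message when the times are out of order,
  # and returns true only when no errors were recorded.
  def validate
    if @starting_at > @ending_at
      @errors << "Starting time is later than ending time!"
    end
    @errors.empty?
  end
end
```

With this in hand, a meeting that ends before it starts fails validation, while one with sensible times passes, which is exactly the behavior the model-level validation enforces.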