At the Forge - Database Modeling with Django
Now that our data model is in place, let's see how we can work with it. Given that our model is brand new, and that there is no data currently stored in it, let's begin by adding some data to it.
In last month's column, we saw how each URL request in Django results in the invocation of a method. Which method is invoked depends on the settings of urls.py, a site-wide configuration file that tells Django what application and method should be associated with what URL.
One way to add data to our blog database, and to get some practice working with the various components of Django, is to do so via a view and template. Normally, I would demonstrate how to do this with an HTML form, but for space reasons, I use a simpler (and more contrived) way, inserting dummy data into the database.
The first step is to add a new line to the definition of the urlpatterns variable, defined in urls.py:
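The original snippet isn't reproduced here; in the Django 0.96-era urlpatterns syntax this column uses, the added line might look like the following (the exact regex and module path are assumptions):

```python
# urls.py -- map the new URL to our view (module path is an assumption)
urlpatterns = patterns('',
    (r'^blog/add_dummy_data/$', 'blog.views.add_dummy_data'),
)
```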
Now, we can go to the URL /blog/add_dummy_data, and Django will invoke the blog.add_dummy_data method. The beginning of this method is quite simple, namely:
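That beginning is just the function signature in views.py; a sketch:

```python
# views.py -- every Django view function receives the request object
# as its first parameter
def add_dummy_data(request):
    ...
```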
The name of the method is obvious from the configuration file. The number of parameters is determined by the number of parenthesized groups in urlpatterns.
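To illustrate (this URL and view are hypothetical, not part of our blog), a pattern containing one parenthesized group causes Django to pass one captured value along to the view:

```python
# Hypothetical: one group in the regex means one extra view parameter,
# so the URL /blog/5/ would invoke one_posting(request, '5')
(r'^blog/(\d+)/$', 'blog.views.one_posting'),
```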
Now what do we do? If we were dealing with raw SQL, I would suggest the following:
INSERT INTO blog_posting (title, body, publication_date)
    VALUES ('Dummy 1 headline', 'This is my first blog post',
            NOW() - interval '1 hour');

INSERT INTO blog_posting (title, body, publication_date)
    VALUES ('Dummy 2 headline', 'This is my second blog post', NOW());
These statements insert two rows into the blog_posting table: the first with a timestamp from one hour ago and the second with the current timestamp.
But, we don't want to use SQL. We want to use Python, creating objects that automatically map to these INSERT statements. So, it makes sense that all we have to do is create new instances of the Posting object, passing it appropriate parameters. And, sure enough, we can do that:
p = Posting(title='Dummy 1 headline',
            body='This is my first blog post',
            publication_date=datetime.now() - timedelta(hours=1))
p.save()

p = Posting(title='Dummy 2 headline',
            body='This is my second blog post',
            publication_date=datetime.now())
p.save()
If you are an experienced Python programmer, the above code shouldn't be surprising at all. We are simply creating two new instances of Posting, passing arguments that set each object's attributes. Then, we invoke the save() method on each posting, which writes it to the database.
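One bug-prone detail in this code is worth a side note: timedelta's positional parameters are (days, seconds, microseconds, milliseconds, minutes, hours, weeks), so a bare timedelta(0, 0, 0, 0, 1) is one minute, not one hour. Keyword arguments avoid the ambiguity entirely:

```python
from datetime import timedelta

# Positional order is (days, seconds, microseconds, milliseconds,
# minutes, hours, weeks) -- the fifth argument is minutes
assert timedelta(0, 0, 0, 0, 1) == timedelta(minutes=1)

# The keyword form says exactly what we mean
one_hour = timedelta(hours=1)
print(one_hour.total_seconds())  # 3600.0
```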
Finally, we finish our method with:
return HttpResponse("Created blog posts.")
With the method (shown in Listing 1) defined, start up the server:
python manage.py runserver 127.0.0.1:8000
Then, point the Web browser to the URL defined in urls.py, and get the message:
Created blog posts.
Next, check the database, just to be sure:
atf=# \x
Expanded display is on.
atf=# select * from blog_posting;
-[ RECORD 1 ]----+------------------------------
id               | 1
title            | Dummy 1 headline
body             | This is my first blog post
publication_date | 2007-06-15 16:13:34.609396-05
-[ RECORD 2 ]----+------------------------------
id               | 2
title            | Dummy 2 headline
body             | This is my second blog post
publication_date | 2007-06-15 16:14:34.675235-05
As you can see, we were able to create these new objects successfully and store them in the database.
Listing 1. views.py for Creating New Dummy Posts
from django.template import Context, loader
from django.http import HttpResponse
from blog.models import Posting
from datetime import datetime, timedelta

def add_dummy_data(request):
    p = Posting(title='Dummy 1 headline',
                body='This is my first blog post',
                publication_date=datetime.now() - timedelta(hours=1))
    p.save()

    p = Posting(title='Dummy 2 headline',
                body='This is my second blog post',
                publication_date=datetime.now())
    p.save()

    return HttpResponse("Created blog posts.")
Now that we've created these objects, let's see if we can retrieve and display them—a pretty typical thing to do if you write applications in Django. Because the most common thing you might want to do with a blog is display all of the postings in reverse chronological order, we write our index method to do that. If you still don't have an entry in urls.py for index, make sure there is a line that looks like the following in the definition of urlpatterns:
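In the same urlpatterns style as before, such a line might read as follows (the URL regex and module path are assumptions):

```python
# urls.py -- route the blog's front page to our index view
(r'^blog/$', 'blog.views.index'),
```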
Now, we open up views.py to create our index method. The first task in that method is to get all the postings. Django makes that trivially easy to do:
postings = Posting.objects.all()
This retrieves all the instances of Posting (which happen to be stored as rows in our database) and assigns them to the variable postings. This variable isn't a list, but an instance of a QuerySet object. We most likely will want to iterate over the QuerySet, but we can perform other operations on it, such as reordering it or retrieving selected elements.
We also can select particular items from the database. This is done with two methods: one called filter (which returns objects matching a given condition) and one called exclude (which does the opposite, returning objects for which the condition is false). Both filter and exclude take keyword parameters, built up dynamically by joining column names with various lookup functions. The column name and function name are joined with a double underscore (__).
For example, we can get only those postings from this year:
this_year_postings = Posting.objects.filter(
    publication_date__gte=datetime(2007, 1, 1))
Sure enough, this returns both of our postings. Because filter and exclude return QuerySet objects, we can chain them together, creating just the query we want in Python code.
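For example, chaining the two might look like this (a sketch, assuming the Posting model and dummy data above):

```python
# This year's postings, minus the first dummy entry; nothing hits
# the database until we actually iterate over the result
postings = Posting.objects.filter(
    publication_date__gte=datetime(2007, 1, 1)
).exclude(title='Dummy 1 headline')
```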
But, what if we want only the most recent posting? If you're thinking there will be a “limit” feature, you've been working at the SQL level (or in Rails) for too long. Because QuerySets use lazy evaluation, you simply can say:
latest_posting = Posting.objects.all()[:1]
We similarly can order our objects by using the order_by method on them, which can be chained along with filter and exclude:
latest_posting = Posting.objects.filter(
    publication_date__gte=datetime(2007, 1, 1)
).order_by('-publication_date')
Notice that we put a minus sign (-) before the word publication_date. This tells Django we want to order the results in reverse.
Django has a wealth of such methods, giving both a great deal of flexibility in constructing your queries and a rich Python API that allows you to ignore the low-level SQL calls almost entirely.
Finally, we can get information out of our object as we would retrieve it from any Python object:
output += "<h1>%s</h1>\n" % posting.title
output += "<h2>%s</h2>\n" % posting.publication_date.isoformat()
output += "<p>%s</p>\n\n\n" % posting.body
If we put this all together, as shown in Listing 2, we'll have a view method (albeit without a proper Django template) that shows each of the blog postings.
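Listing 2 is not reproduced here, but a minimal sketch of such an index view, built from the pieces above, might look like the following (the ordering and the raw HTML string-building are assumptions):

```python
from django.http import HttpResponse
from blog.models import Posting

def index(request):
    # Retrieve every posting, newest first
    postings = Posting.objects.order_by('-publication_date')

    output = ""
    for posting in postings:
        output += "<h1>%s</h1>\n" % posting.title
        output += "<h2>%s</h2>\n" % posting.publication_date.isoformat()
        output += "<p>%s</p>\n\n\n" % posting.body

    return HttpResponse(output)
```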