The PARI Package on Linux
In addition to the standard mathematical operations +, -, *, and /, you will find transcendental and number-theoretic functions, functions dealing with elliptic curves, number fields, polynomials, power series, linear algebra, sums, and products, as well as functions for plotting.
For example, you can factor numbers and polynomials:
? factor(249458089531)
%9 =
[7 2]
[48611 1]
[104729 1]

meaning 249458089531 = 7^2 * 48611 * 104729, or
? factor(t^3+t^2-2*t-2)
%10 =
[t + 1 1]
[t^2 - 2 1]

meaning t^3+t^2-2*t-2 = (t+1)*(t^2-2), where t^2-2 cannot be factored further using rational coefficients. Note that only polynomials in one indeterminate can be factored.
To solve the linear system x=3*y, y=2*x-1 (using Gaussian elimination), you rewrite it as x-3*y=0, -2*x+y=-1, take the coefficient matrix A and the right-hand side b, and compute
? A=[1,-3;-2,1]
%11 =
[1 -3]
[-2 1]

? b=[0;-1]
%12 =
[0]
[-1]

? gauss(A,b)
%13 =
[3/5]
[1/5]

giving you the result x=3/5, y=1/5.
To determine the roots of a polynomial you may just enter roots:
? \precision=4
   precision = 4 significant digits
? roots(t^3+t^2-2*t-2)
%14 = [-1.414 + 0.0000*i, -1.000 + 0.0000*i, 1.414 + 0.0000*i]~
Plotting gives you a quick overview of a function even in text mode; see Figure 1. Plotting to a separate X11 window instead yields the high-resolution graph shown in Figure 2.
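The exact plotting commands from the original listing were not preserved here; as a sketch, gp's built-in plot function draws an ASCII graph directly in the terminal, while ploth opens a separate high-resolution X11 window (the function sin(x) and the plotting range are chosen purely for illustration):

```
? plot(x = -5, 5, sin(x))    \\ ASCII plot, drawn in the terminal (cf. Figure 1)
? ploth(x = -5, 5, sin(x))   \\ high-resolution plot in a separate X11 window (cf. Figure 2)
```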
The gp commands may be classified into expressions (which are evaluated immediately), function definitions, meta-commands, and help. Via the ? key, you obtain help on the meta-commands controlling gp as well as on each of the built-in functions. The meta-commands let you control how pari results are printed and handle reading from and writing to files. \w <filename> saves your complete session (from starting gp up to issuing this command) to a file; \r <filename> does the reverse job, reading the session back in and returning you to the exact state that you previously saved. Other useful features include writing expressions in TeX/LaTeX format (via texprint) and toggling the printing of timing information with the # command. Of course, you may also run gp as a batch job using standard I/O redirection. Input can span several lines by using the \ continuation character.
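As a brief illustration of these meta-commands (the filename is hypothetical), a session might include:

```
? \w mysession.gp      \\ save the complete session so far to mysession.gp
? \r mysession.gp      \\ read it back in later, restoring the saved state
? #                    \\ toggle the printing of timing information
? texprint(t^2 - 2)    \\ print the expression in TeX format
```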
Defining your own functions in gp is quite simple. As an example, cube returns the third power of its argument:
? cube(x)=x*x*x
? cube(3)
%15 = 27
? cube(t+1)
%16 = t^3 + 3*t^2 + 3*t + 1
You can use control structures such as if, while, until, and for (with some special variants), goto and label, as well as functions for printing and for clearing variables. Though pari already provides a function fibo, let us try to program a function for the Fibonacci sequence ourselves. This sequence is defined by f(0)=1, f(1)=1, and f(n)=f(n-1)+f(n-2) for n>=2, yielding f(2)=1+1=2, f(3)=2+1=3, f(4)=5, ... The (probably) shortest such function uses recursion. Here you need the if expression to test for the special cases f(0)=1 and f(1)=1. if(a,seq1,seq2) evaluates seq1 if a is nonzero and seq2 otherwise:
? fib(n)=if(n==0,1,\
    if(n==1,1,fib(n-1)+fib(n-2)))
? fib(5)
%17 = 8
For small n this is okay. A faster way is to compute the Fibonacci numbers by iteration. In each step the new value h=f(n) is computed as the sum of the last two values g=f(n-1) and f=f(n-2), and afterwards these values are shifted along. For this you need the variables f, g, h, and m (a counter). To avoid conflicts with variables defined outside the function, these four are declared local by writing them at the end of the parameter list. The for(x=a,b,seq) expression evaluates seq for each value of x running from a to b. Expressions separated by a semicolon ; form a sequence, and a sequence's value is always that of its last expression:
? fib2(n, m,f,g,h)= f=1; g=1; \
    for(m=2, n, h=f+g; f=g; g=h); g
? fib2(5)
%18 = 8