As MySQL has about 50 mirrors around the world, and we don't get download statistics from them, it's hard to tell how many MySQL installations are out there.
The WWW and FTP log at http://www.mysql.com/ gives us the information shown in Tables 3 and 4; all counts are based on the number of distinct IPs.
On linux.com, every page issues somewhere between 10 and 20 queries to the database, and linux.com serves anywhere between 500K and 800K page views per day. They run MySQL on its own server, a dual Xeon system with large amounts of RAM and hard-disk space.
While writing this, I asked Linux Journal what they use as a web back end, and learned they also use MySQL. Among the awards we have been given, we highly value the “Most Used Database” 1998 award we got from Linux Journal's readers.
Multi-threaded, multi-user and very fast
APIs to many different languages
A good, free ODBC driver
Many different column types which support all ANSI SQL92 and all ODBC 2.50 types as well as some new ones
Support for almost all ODBC 3.0 and ANSI SQL92 functions
Full support for SQL GROUP BY and ORDER BY clauses; support for group functions (COUNT, AVG, STD, SUM, MAX and MIN)
Ability to mix tables from different databases in the same query
Very flexible privilege system where privileges are based on host and user
Support for LEFT OUTER JOIN with both ANSI SQL and ODBC syntax
Fixed-length and variable-length records
Handles large databases; at TcX, we are using MySQL with some databases that contain over 50 million records.
Very robust with no memory leaks; all reported memory leaks have been in non-MySQL libraries, most notably some versions of glibc.
Ability to configure many different character sets, e.g., Japanese/Chinese
Error messages available in many languages
Many utilities and much contributed software
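As an illustration of the GROUP BY, group-function and LEFT OUTER JOIN support listed above, the following sketch shows both the ANSI SQL and ODBC join syntaxes. The customer and orders tables here are hypothetical, invented for the example:

```sql
-- Hypothetical tables for illustration only
CREATE TABLE customer (id INT NOT NULL, name CHAR(30));
CREATE TABLE orders (customer_id INT, amount DECIMAL(8,2));

-- Group functions with GROUP BY and ORDER BY
SELECT customer_id, COUNT(*), SUM(amount), AVG(amount)
FROM orders
GROUP BY customer_id
ORDER BY customer_id;

-- LEFT OUTER JOIN, ANSI SQL syntax: customers with no
-- orders appear with NULL in the amount column
SELECT c.name, o.amount
FROM customer c LEFT JOIN orders o ON c.id = o.customer_id;

-- The same join written with the ODBC escape syntax
SELECT c.name, o.amount
FROM { oj customer c LEFT OUTER JOIN orders o ON c.id = o.customer_id };
```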
MySQL is extensively documented. Most questions can be resolved by reading the MySQL manual. We try to document everything to avoid getting too many questions on the MySQL mailing lists. The current manual has recently been improved considerably, thanks to the great work done by Paul DuBois.
Many small, extremely useful extensions that help you get your work done
Binary portable table format—it is now possible to copy MySQL table files between different architectures.
More and longer indexes—the maximum is now 32 indexes per table, each up to 500 bytes long (previously 16 indexes of up to 128 bytes).
Even better index compression—it is faster and uses even less disk space.
Indexes on BLOB/TEXT columns, just as on CHAR columns.
Support for tables greater than 4GB on file systems which support files that big. The new limit is about 9 million terabytes.
Better fragmentation handling for the dynamic row format.
Added in-memory tables with hashed keys—an extremely fast way to have lookup tables.
Allows true floating-point columns with values such as 1.0E+10.
Includes example C code for a procedure that analyzes the result from a SELECT.
Faster SELECT DISTINCT handling has been added.
Added much useful information in SHOW TABLE STATUS.
CREATE TABLE (...) SELECT * FROM a,c WHERE something. This creates a table using data from a SELECT in one step; the column types and field names are automatically derived from the SELECT.
Removed the old limitation with big GROUP BY queries (with SQL_BIG_TABLES=0) that resulted in a “table is full” error.
Loads BLOBs from files with the LOAD_FILE() function.
COUNT(DISTINCT) is supported.
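Several of the new features above can be sketched in a few SQL statements. The table names, column names and file path below are hypothetical, chosen only for illustration:

```sql
-- CREATE TABLE ... SELECT: the column types and names
-- are derived automatically from the SELECT
CREATE TABLE big_customers
SELECT c.id, c.name, SUM(o.amount) AS total
FROM customer c, orders o
WHERE c.id = o.customer_id
GROUP BY c.id, c.name;

-- COUNT(DISTINCT) is now supported
SELECT COUNT(DISTINCT customer_id) FROM orders;

-- Load a BLOB/TEXT column from a file with LOAD_FILE()
UPDATE documents SET body = LOAD_FILE('/tmp/report.txt') WHERE id = 1;

-- An in-memory table with hashed keys, using the
-- TYPE = HEAP syntax of this MySQL era
CREATE TABLE lookup (id INT NOT NULL, value CHAR(20)) TYPE = HEAP;
```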