Meeting with Costa Rica's Minister of Science and Technology
I just had the pleasure of meeting with Guy F. de Téramond, who is the Minister of Science and Technology of Costa Rica. Don Guy, as he is referred to by his staff and others, is a serious Linux and free-software advocate. As ministers are appointed by the president to their cabinet-level positions, this puts a pro-Linux person very high up in the government.
I originally met him in June 2001, at the Costa Rica Linux User's Group (GULCR). At that time he really impressed me with his interest in Linux and with what I saw as a serious interest in bringing good internet connectivity to the general public. This meeting was a chance to fill in the blanks of my knowledge of his interests and commitments.
The meeting was in his office at the Ministry. When I arrived he was experiencing a computer problem: he had built a new kernel for his Mandrake-based Toshiba laptop and deleted the old kernel before making sure the new one was in place and working. He chalked this up to a learning experience and, after our interview, went back to doing an update from the Mandrake 8.1 CDs.
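His mishap illustrates a general rule of kernel upgrades: keep the old kernel installed and bootable until the new one has been verified. A minimal sketch of what that looks like in /etc/lilo.conf (Mandrake 8.1 shipped with LILO by default; the kernel file names and partition here are illustrative, not taken from his machine):

```
# Keep both kernels bootable; run /sbin/lilo after editing this file.
default=linux

image=/boot/vmlinuz-new       # the freshly built kernel
    label=linux
    root=/dev/hda1            # illustrative root partition

image=/boot/vmlinuz-old       # known-good fallback kernel
    label=linux-old
    root=/dev/hda1
```

Only after the new kernel has booted successfully should the old image be removed.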
Guy is actually a research physicist who received his degrees in Paris between 1968 and 1977. He spent a year at Harvard and a year at Stanford. He is also a Guggenheim Fellow (1986) and has received a Fulbright Research Award (1983), the Clodomiro Picado Twight National Prize (1979) and the Medal of the Association of Space Explorers (1997).
He served as a full professor of Physics at the University of Costa Rica from 1982 until 2000, when he was appointed to his current position. He is also a member of the American Physical Society and a founding member of the National Academy of Sciences in Costa Rica. I could go on, but I think you have the general idea.
He was responsible for connecting Costa Rica to BITNET in 1990 and to the Internet in 1993. When asked “Why BITNET?”, he explained that at the time the Internet was primarily a US phenomenon for computer scientists, while BITNET was used for collaboration in other fields.
When he returned to Costa Rica he was convinced of the potential that this connectivity could bring to the country and was responsible for the creation of CRNet, a fiber router-based backbone linking all major academic and research institutions in Costa Rica. Going beyond the borders, he was involved in the interconnection of Nicaragua, Panama, Jamaica, Honduras and Guatemala to the Internet.
I asked him about his passions, and he said theoretical physics is first and computers and the Internet are second. He smiled and declined to comment on his third.
He saw CRNet as something he would create to get the needed connectivity and then go back to his work in theoretical physics. His knowledge and experience with computing got him tied up in this area more than he had planned. What he has done and continues to do, however, is a huge benefit to Costa Rica.
The first UNIX system in Costa Rica was an IBM RISC system. Back in 1991, Guy was in the first group of people who attended training on that system. Another attendee was Mario Guerra, who has since become a UNIX expert and is currently working at MICIT.
Guy soon became involved in getting other Caribbean nations connected to the Internet. This usually meant traveling to other countries, from Nicaragua to Jamaica, and helping them set up. In each case, the goal was to make them self-sufficient by teaching them what they needed to know. “We were working together with the people because if you go and make the connections, and nobody learns anything, they will become dependent on you, and that's not the idea”, he said. “That's the idea for a private company.”
“[With CRNet] we connected 24 universities and ten government offices”, he said. “We always had the strategy to work with the computer people because they will learn faster. Thus, even though there were very few people at CRNet, we [did] a lot.”
This work continued and included more efforts to increase the bandwidth available to Costa Rica, including a PanAmSat link and a separate downlink-only satellite connection. Mario worked on proxy caching to decrease the need for incoming bandwidth.
When he came to the Ministry in 2000, Guy wanted to continue this connectivity project but at the national level—that is, bring broadband connectivity to anyone who wanted it.
Costa Rica received a $1.2 million grant to start a DSL pilot project. This project offers DSL connectivity in five of Costa Rica's 240 phone districts. “We are just using the infrastructure that is there; just putting the logical elements on top of the fiber and then using the copper line”, he said. “DSL is fantastic technology because you are using what is there.”
“Now we are going into the second phase”, he added. “The second phase has two crucial parts. We are going to deploy 100,000 DSL lines in all 240 of the nation's phone districts.”
South Korea leads the world in per capita broadband connectivity (cable and DSL), with Canada second and the US third. With 100,000 lines, which is 10% of the copper lines in the country (and 2.5% of the total population), this effort will move Costa Rica up to number three.
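Taken at face value, those percentages imply some rough national totals. The arithmetic below is my own inference from the stated figures, not numbers given in the interview:

```shell
dsl_lines=100000
# 100,000 lines are 10% of the country's copper lines, so total copper lines:
echo $(( dsl_lines * 10 ))   # 1000000
# ...and 2.5% of the total population, so implied population (1/0.025 = 40):
echo $(( dsl_lines * 40 ))   # 4000000
```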
The total cost of this effort will be about $60,000,000. This investment, while a huge amount for Costa Rica, is relatively small because DSL takes advantage of the fiber and copper infrastructure already in place. Also, deploying DSL helps local telephone districts because it substantially decreases the load on the switched lines by getting all the long-time internet connections moved off the switched network. “We are paying a lot of attention to scalable technology—not frame relay, ATM and the like”, he said.
Guy pointed out that this universal connectivity is happening because of the government monopoly on telecommunications service. ICE, the telephone company, is a government agency and adding this internet connectivity through DSL makes perfect sense. If, on the other hand, a private company were to offer internet connectivity, it wouldn't be universal because connectivity to areas of low population would not be profitable.
This is similar to what happened in the United States with the REA, which brought electricity to virtually everyone. Today, 97% of the households in Costa Rica have electricity (100% from natural sources), and the modern equivalent of the electricity effort is internet connectivity.
As Guy said, “You can do so much work from your office in your house and spend more time with your family if you have good communications.” In addition, the increasing number of Costa Ricans moving to population centers has stressed the transportation infrastructure and created pollution in, for example, San José. Thus, the ability to telecommute is important to the nation as a whole.
In addition to this fiber/copper infrastructure, they are experimenting with communications over the power line. Electrical distribution is also handled by ICE, so, for example, the existing fiber run along the power grid for monitoring will be made available as a backup for the telecommunications fiber links.
To support all this connectivity, there needs to be increased bandwidth to the Internet outside of Costa Rica as well. That brought the discussion to Arcos. Arcos is a ring-based system in the Caribbean that covers Mexico, the other countries of Central America, the Caribbean islands and Miami. It has a bandwidth of about a terabit per second, and its structure is such that all the repeaters will be located on land, making an upgrade much less expensive than with typical sea-based repeaters.
Arcos should be in operation by the time you read this article. Guy explained that they decided to go with Arcos because it offers much less expensive bandwidth than satellite links do.
He also stressed that the project is meant to connect everyone: individuals, schools, health centers and the like, as well as banks, commerce and high-tech businesses.
There already has been a hearing on rates, and all the comments have been in the direction of reducing them further. (The proposal was $30/month for 64k DSL, $40/month for 128k and so forth, including the line and the ISP.) As ICE is a public agency, the rates will reflect cost of service rather than the need to show a profit.
The engineering part of the project has just been completed. The design is scalable and offers, for example, easy ways to implement security. They have been careful to follow ITU standards, so there will be no dispute over what they have specified.
Requests for bids are about to go out for the additional equipment, and initial funding has already been approved by the government. The goal is to start deployment in January 2002.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
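That find-plus-grep combination can be written as a one-liner. The sketch below builds a small sample tree under a temporary directory rather than searching /home, so it can be run safely; the file names and the search string "ERROR" are illustrative:

```shell
# Build a small sample tree, then find every .log file that
# contains the string "ERROR" and print the matching file names.
mkdir -p /tmp/demo/alice /tmp/demo/bob
echo "ERROR: disk full" > /tmp/demo/alice/app.log
echo "all quiet"        > /tmp/demo/bob/app.log
find /tmp/demo -name '*.log' -exec grep -l 'ERROR' {} +
# prints /tmp/demo/alice/app.log
```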
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.