Freenet Installation and Administration
By default, Freenet will store a maximum of 1,000 items in the data store and use a maximum of 500MB of hard drive space. The diskCache and dataStoreSize options control this. Note that, due to bugs, the actual amount of space used can easily exceed these limits. First, files in the data store often simply get lost and don't get deleted when they should be; restarting the node periodically, every day or week, will clean out those lost files. Second, while transferring large files, the node will store the whole file while it's in transit, even if doing so pushes the node over the size limit. Hopefully these bugs will have been fixed by the time you read this article, though the second problem is difficult to fix for various reasons.
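As a sketch of what the relevant configuration lines might look like, assuming diskCache takes the maximum item count and dataStoreSize the maximum size in bytes (check the comments in your own configuration file for the exact option units):

diskCache = 1000
dataStoreSize = 500000000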
After that we have dataPath and dataPropertiesPath, which control where the data store lives (the default is the .freenet directory where you installed Freenet). The difference between the two is that dataPath is where the actual data gets put, while dataPropertiesPath is where the .inf files describing the stored data get put. In all but unusual cases, the two will have the same value. You may want to set this path to a partition with more space available than the default; any filesystem that supports long filenames is fine. Note that you can't use multiple partitions simultaneously; you'll have to run multiple nodes on different ports for that.
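For example, to put both the data and the .inf files on a roomier partition (the directory shown here is hypothetical; use whatever path suits your system):

dataPath = /space/freenet/store
dataPropertiesPath = /space/freenet/store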
There is also an option to limit bandwidth use, bandwidthLimit. This limit applies to outgoing data only; incoming data is not limited (see maximumConnectionThreads below). For most people the default 50K/sec limit is probably fine, but if you can, please increase it to about a third to a half of your outgoing bandwidth (roughly 50K/sec to 300K/sec for the average cable or DSL line), or disable the limit altogether. Note that the bandwidth limit does not apply to requests originating from the same machine.
Incoming bandwidth can be limited, to a degree, with the maximumConnectionThreads option, which caps the number of simultaneous incoming connections from other nodes. Changing it changes the amount of bandwidth and memory used. Most people can probably leave it at its default, but if you have memory restrictions (or memory to spare), you can reduce or increase it.
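A sketch of both options together, assuming bandwidthLimit is specified in bytes per second (both values here are illustrative, not the shipped defaults; 153600 bytes/sec is roughly 150K/sec):

bandwidthLimit = 153600
maximumConnectionThreads = 50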
Note that there is currently no way to limit the total amount of data transferred in a given time period, so as of yet there is no way to allow only, for example, 1GB of data transfer per month.
At the end of the configuration file are the options that control logging. A single log file, freenet.log, is used by default (controlled by logFile). The logging option controls the threshold of events logged. The default, “normal”, doesn't log any of the mundane details of your node sending and receiving data. The “minor” setting can often provide useful debugging information; the “debugging” setting, however, is overkill. Setting logging to minor is probably your best bet.
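For instance, these two lines would send minor-level logging to a log file of your choosing (the path shown is just an example):

logFile = /var/log/freenet.log
logging = minor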
Finally, there is one last option that's not in the default configuration file: informDelay. Before writing the node's address to informURL, the node waits, by default, 24 hours to make sure your node is stable; your node must be running for 24 continuous hours before informURL is notified. Initially, one of the big problems with the informURL system was that people would try Freenet for a few minutes, decide it wasn't worth it and remove the Freenet software. Meanwhile, the address of their nonfunctioning node was sent to informURL. When nodes tried to get a working address, almost every single address in informURL would be nonfunctioning. It got to the point where one of the big problems in getting a node up and running was finding another node!
If you plan on restarting your node once a day, you'll have to change informDelay. Adding the line informDelay = 0 to the configuration file disables the feature entirely, which is what you want in that case. Otherwise, don't touch informDelay, please!
Now that you've got your node up and running, you want to see if it works, right? First, try fetching a test key from the command line.
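Assuming your installation includes a freenet_request client alongside the freenet_insert tool used below (the client name is an assumption; substitute whatever your installation provides), a test fetch looks something like:

freenet_request KSK@Aardvark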
If that works, you'll see the message Congratulations! You've fetched a Freenet key on the console after some debugging information. If it doesn't do anything for more than two minutes or so, it's not working.
Next try the URL http://localhost:8081/KSK@Aardvark in your favorite web browser to see if fproxy is working. If it is, you'll see the page of Aardvark's Freenet Index come up.
Actually getting your node established in the network can be a little tricky. It should, and usually will, happen automatically, but it often helps to push the process along a little. This is an optional procedure. There are two main parts to it: finding other nodes and getting known by other nodes. Note that if you're running a transient node, the second part isn't important.
The first part is simple: use Freenet. Just take a look at some of the content, such as the above-mentioned Aardvark's Freenet Index (KSK@Aardvark) and gj's web page (KSK@webpages/gj_jump0).
The second part is a little bit harder. You can insert a random chunk of data (1K) into Freenet with:
dd if=/dev/urandom bs=1024 count=1 | freenet_insert -length 1024 CHK@
When your node inserts that data, other nodes will find out about your node from the SourceAddress on the packets of data.
In any case, remember that it will take a few days before your node becomes known and other nodes start to request data from you. So if nothing much seems to be happening—don't panic!