I'm doing my embedded project with the BF537 (on a BF537-STAMP board), running a 2.6-series kernel.
After digging through several drivers, LDD3 and other material, I realize I'm thoroughly confused.
The issue is that I'm writing a network (Ethernet) driver that talks to its chip over SPI, and I want it to use an already existing SPI driver (spidev or the controller driver underneath it). By now I have a driver that can create an ethX interface and fill in its information (all of this is fully described in LDD3). What I can't find is how to use ANOTHER, existing SPI driver. My probe() function gets a struct platform_device, which (as far as I know) I have to describe in a special board file where resources and devices are allocated and registered. So it looks as if, to get access to SPI (the SPI resources), I have to declare them myself and create a master, i.e. write a new SPI driver, when what I actually want is to use the existing one.
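From what I have pieced together so far, the usual mechanism seems to be: the board file declares a struct spi_board_info for the slave, the already-loaded SPI master driver (bfin5xx_spi on the STAMP, I believe) creates a struct spi_device from it, and the SPI core then calls the probe() of whatever spi_driver has a matching name. A rough sketch of what I mean (the name my_eth_spi and the bus/chip-select numbers are just made up for illustration):

#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/spi/spi.h>

/* Board file side (e.g. arch/blackfin/mach-bf537/boards/stamp.c): describe
 * the SPI slave; the existing Blackfin SPI master driver picks this up, so
 * no new master has to be written. Bus and chip-select numbers are guesses. */
static struct spi_board_info my_eth_board_info[] __initdata = {
	{
		.modalias	= "my_eth_spi",	/* must match the spi_driver's name */
		.max_speed_hz	= 20 * 1000 * 1000,
		.bus_num	= 0,		/* the existing BF537 SPI controller */
		.chip_select	= 1,
	},
};

static int __init my_eth_board_register(void)
{
	/* hand the description to the SPI core; it creates the spi_device
	 * on the matching master once that master has registered */
	return spi_register_board_info(my_eth_board_info,
				       ARRAY_SIZE(my_eth_board_info));
}
arch_initcall(my_eth_board_register);

Is that the right idea, or am I off track?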
Also, looking at a similar driver (bfin_mac) I found something useful, but it still creates its own MII master; it doesn't take one from "somewhere in the kernel".
I understand that what I wrote above may be wrong, but it is what I pieced together from drivers in different kernel versions.
Could someone explain the mechanism by which driver ONE can use driver TWO that is already loaded in the kernel?
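For example, is it simply that my network driver should register a struct spi_driver instead of a platform_driver, receive the struct spi_device in its probe(), keep it in the netdev's private data, and do all transfers through spi_sync()/spi_write()? Something along these lines (names made up, error handling stripped; I think the in-tree enc28j60 driver does roughly this, if it exists in this kernel version):

#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/spi/spi.h>

struct my_eth_priv {
	struct net_device *netdev;
	struct spi_device *spi;	/* handle to the slave on the existing master */
};

static int my_eth_spi_probe(struct spi_device *spi)
{
	struct net_device *dev;
	struct my_eth_priv *priv;

	dev = alloc_etherdev(sizeof(*priv));
	if (!dev)
		return -ENOMEM;

	priv = netdev_priv(dev);
	priv->netdev = dev;
	priv->spi = spi;	/* keep it for later spi_sync()/spi_write() calls */
	spi_set_drvdata(spi, priv);

	/* set up dev->open, the transmit handler, MAC address, etc.
	 * as described in LDD3, then register the interface */
	return register_netdev(dev);
}

static int my_eth_spi_remove(struct spi_device *spi)
{
	struct my_eth_priv *priv = spi_get_drvdata(spi);

	unregister_netdev(priv->netdev);
	free_netdev(priv->netdev);
	return 0;
}

static struct spi_driver my_eth_spi_driver = {
	.driver = {
		.name	= "my_eth_spi",	/* matches the board info modalias */
		.owner	= THIS_MODULE,
	},
	.probe	= my_eth_spi_probe,
	.remove	= my_eth_spi_remove,
};

static int __init my_eth_init(void)
{
	return spi_register_driver(&my_eth_spi_driver);
}
module_init(my_eth_init);

static void __exit my_eth_exit(void)
{
	spi_unregister_driver(&my_eth_spi_driver);
}
module_exit(my_eth_exit);

MODULE_LICENSE("GPL");

Is that the mechanism, or is there something else I'm missing?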
Thank you very much!!