Optimal home network advice


An optimal home network requires serious research & advice. The following is current as of 2012-06 but must be updated as of 2012-09 after the FM2 & A85 chipset, socket & processor launch, plus other announcements re Hy-Fi & IEEE 1905, which may relieve the pressure on gigabit ethernet to move data around the house quickly (by permitting this data to move via existing cat3, coax & powerline (IEEE 1901) protocols, whatever is available).

Where a gigabit guarantee to the local transit exchange or better applies, the LAN host function indicated below may be co-located in a transit exchange or otherwise hosted by third parties. Any such guarantee simplifies the home network & makes it more feasible to invest in high end terminals as described.

introduction: non-optimal home networks

Imagine a badly designed ad hoc network that slowly evolved over time, looking something like this: [image: Rural NS home network 2012.png]

Terrible, isn't it? Absolutely no consistency of operating systems, drive types or video drivers. Each box probably does four or five jobs (though not twenty or thirty) and drive sizes are so inconsistent that no drive can be easily repurposed. No interchangeable parts at all. RAM types, CPU sockets, drive interfaces, even (likely, though they are not shown) video interfaces all require extremely specific replacement parts. No use is made of IEEE 1901 or IEEE 1905.1 to take the load off the wired ethernets, leaving lots of load on 802.11n. No machines capable of 802.11ac at all. Large drives (>1000GB) are almost all accessible only through a gigabit router (probably not a fast one), not even dual gigabit and certainly not 1905.1 resilient. This is not what should be deployed in SOHOs, offices, or homes after 2012.

Maintaining this particular network has unpredictable costs, but more importantly, it can waste weeks of its owners' time without notice. The lack of interchangeability, especially in a rural area far from parts and relying on delivery of online orders, requires a great deal of expertise and research to resolve. Even where expertise is available cheaply for typical problems like coolers or memory upgrades or disk swaps, applying it simply amplifies the heterogeneous nature of the network and probably makes it less maintainable over time. This network is not professional to deliver, though it certainly would take professionals to keep it from collapsing.

That said, properly run, with data stored in multiple places and boot images restored to the relatively fragile boot drives using bootp etc., such a network can probably gronk along until 2014, at which point most of these boxes should be donated to a charity the donors don't like much.

"optimal" assumptions[edit]

The total cost of ownership for a SOHO should stay under $1000/year for all devices including mobile, exclusive of waste on roaming or mobile 3G/4G data or capped-use plans at home (a Canadian horror). Initial outlay should never be less than half of that, and no device with a clearly <10 year anticipated lifespan should usually be bought. That includes mobile devices, laptops & ultrabooks (though the TCO & total maintenance cost numbers here do not). If you can't see yourself using it in 2024, at least as an e-reader in your rocking chair, you should just not buy it at all. Exclusive of such acquisitions, $500/year should be sufficient to maintain a well designed network over that time, $1000/year maximum if major failures occur, unanticipated shifts in technology arrive (do these exist? ECG thinks not) or new opportunities that pay for the hardware arise.
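As a rough sanity check, a minimal back-of-envelope sketch of these budget numbers (the figures are the assumptions stated above, nothing measured):

  # Back-of-envelope check of the budget assumptions above
  # (all figures are the stated assumptions, not measurements).
  lifespan_years = 10            # minimum anticipated lifespan per device
  maintenance_per_year = 500.0   # $/year to maintain a well designed network
  tco_cap_per_year = 1000.0      # $/year ceiling for the whole SOHO, all devices

  # Room left under the yearly cap for amortized purchase prices:
  acquisition_per_year = tco_cap_per_year - maintenance_per_year
  hardware_budget = acquisition_per_year * lifespan_years

  print(f"amortized acquisitions: ${acquisition_per_year:.0f}/year")
  print(f"hardware refresh budget over {lifespan_years} years: ${hardware_budget:.0f}")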

This assumes a certain tech sophistication on the part of the users, some of which can be taught [1] but which requires constant improvement of terminology that generally becomes more practical as stable systems become more practical. The less sophisticated users should rely on more sophisticated peers, who can make maximum use of used parts in non-critical functions, like part of a robust RAID storage, where power/performance permits.

not for A-tards or gamers

Good advice is never for everyone.

Re: OSX networks: It is entirely possible to buy Apple equipment used, keep using it for two or three years, then sell to A-tards for what you initially paid (assuming you maintained and dutifully upgraded the hardware in your care). This is generally not possible for PC hardware no matter how robust, though some brands do seem to have resale value. In what follows the AV-intensive purposes to which Apple equipment is usually put are not considered, nor those of gamers. However, there are AV versions of Linux competitive with Apple, & OSX will run happily on any new 2012 machine.

Gamers by definition cannot use "optimal" systems, often feeling forced to upgrade on a whim or opportunistically. While a reasonably future-proof platform can accommodate a gamer for as much as four years, the six to eight years of reliable use an office worker or corporate user should anticipate (until power draw & noise obsoletes the box) will see escalation in the graphics technology required (notice the rapid evolution of dx10 and dx11 and the quick obsolescence of many low end cards). A separate optimal game network advice page may evolve if the advice diverges sufficiently. However, in most cases it's entirely possible to simply upgrade the integrated FM1/FM2/1155 graphics of a 2012 machine using a PCI-E x16 video card and keep pace with almost all games, though by 2020 any such machine specified in 2012 will not run something.

Assume expensive power

The TCO maintenance price per year should assume at least $0.20/kWh electricity as a North American average, with $1.00/kWh on-peak, for the period 2012-2025 or so. Properly future-proofed devices should remain performance-per-watt competitive to run for 8 years minimum; assuming expensive power may allow 10 or 12 years.

This argues strongly for <17W processor+graphics combinations on the terminal and extremely strongly for all power-conserving infrastructure, including maximum use of short power over ethernet cables to eliminate wall warts from DC devices. Also, the use of IEEE 1901 should eliminate most AC waste from the smarter devices that use it. Gaining these power advantages is worth some "bleeding edge" experiments, especially for network professionals who will be setting up systems for spreadsheet-wise corporations, and increasingly working in a power-cost-sensitive, heat-sensitive, green-ness-sensitive business environment.
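To see why performance per watt dominates, a minimal running-cost sketch under the rates assumed above (the device wattages compared are illustrative, not measurements of any particular box):

  # Illustrative running cost of an always-on box under the assumed power prices.
  HOURS_PER_YEAR = 24 * 365
  AVG_RATE = 0.20    # $/kWh assumed North American average
  PEAK_RATE = 1.00   # $/kWh assumed on-peak

  def yearly_cost(watts, rate):
      """Electricity cost of a device drawing `watts` around the clock."""
      return watts / 1000.0 * HOURS_PER_YEAR * rate

  # Assumed draws: the <17W APU target vs. two heavier desktops
  for watts in (17, 65, 150):
      print(f"{watts:>4} W: ${yearly_cost(watts, AVG_RATE):7.2f}/yr average, "
            f"${yearly_cost(watts, PEAK_RATE):8.2f}/yr if all on-peak")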

Jurisdictions in decline, like Ontario or New Brunswick, have directly subsidized expensive dirty (nuclear, dam) consumption, often at a cost >$100/resident/year, much more when subsidies to generation (including "green" generation) are included. Advice from residents of these jurisdictions must be downgraded by the decision-makers of places that wish to avoid this decline. Power cannot become or remain cheap without permanently destroying most prospects in/for the new/green/efficient economy. Even a home network built to assume <$0.10/kWh power or flat on-peak rates will be useless and have to be replaced by anyone who moves from a declining to a rising jurisdiction (like VA, TN, NS's Annapolis Valley or ME or NH) where bits are cheaper & power is expensive.

model transactions/s, transactions/joule, transactions/s/$

A reasonable framework for assessing performance vs. price including power [2] might state the following:

  • Mhash/J = millions of hashes per joule (energy efficiency; 1 joule of energy is spent for 1 watt in 1 second)
  • W = watt (maximum power consumption, i.e. energy per unit of time: 1 W = 1 J/s)

With increasing use of GPUs for processing, other terminology may also apply:

  • Clock (in MHz) refers to the shader clock only with nVidia cards (not core or memory). With AMD cards the shader clock is not separate, but is part of the GPU clock.
  • SP = Stream processors (Shader Units)

For large scale transactions on the Windows platform a framework like TPC's may apply. Hopefully this will expand soon to properly include power factors, or some useful instruments built on top of the TPC analysis will emerge.
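A minimal sketch tying these units together; all sample figures below are hypothetical placeholders rather than benchmark results:

  # Relating the metrics in this section: since 1 W = 1 J/s,
  #   efficiency (work/J) = throughput (work/s) / power (W)
  # Sample figures are hypothetical placeholders, not benchmark results.

  def per_joule(rate_per_s, watts):
      """Work done per joule, e.g. Mhash/J or transactions/J."""
      return rate_per_s / watts

  def per_dollar(rate_per_s, price):
      """Throughput per dollar of purchase price, e.g. transactions/s/$."""
      return rate_per_s / price

  # Hypothetical GPU hashing example
  print(f"{per_joule(300.0, 150.0):.2f} Mhash/J from 300 Mhash/s at 150 W")

  # Hypothetical TPC-style transaction server example
  tps, watts, price = 5000.0, 400.0, 8000.0
  print(f"{per_joule(tps, watts):.1f} transactions/J, {per_dollar(tps, price):.2f} tps/$")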

initial cash outlay

Pricing assumptions are that terminals must cost under $1000 for entry level, not counting monitors, and no more than $1500 for high end systems with at least one >1080p (1200p, 1600p) display attached; they must be able to drive all attached displays at 1080p 30fps simultaneously, and must put >100GB of SSD+RAM (of which at least 10% must be DDR3-1600 or better) within PCI-E x4 bus speed reach of a teraflop processor, with options to stripe that & double the speed again (using two PCI-E x4 slots, perhaps sacrificing the ability to add a 2nd video card). This minimum should be sufficient for the combination of all boot/swap/core applications. The inherent fragility of striping requires a "wipe and re-image" mentality similar to public libraries - all boots installed on the SSD must be thought of as in RAM.
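A rough sketch of the bus arithmetic behind the "PCI-E x4 reach" and striping requirement, assuming PCIe 2.0 lane rates of roughly 500 MB/s and ignoring controller overhead:

  # Bus-bandwidth arithmetic behind the PCI-E x4 / striping requirement.
  # Assumes PCIe 2.0 (~500 MB/s usable per lane); controller overhead ignored.
  PCIE2_MB_PER_LANE = 500      # approximate usable MB/s per PCIe 2.0 lane
  SATA6_MB = 600               # rough ceiling of one SATA 6Gb/s port

  x4_card = 4 * PCIE2_MB_PER_LANE     # one PCI-E x4 SSD card
  striped_pair = 2 * x4_card          # two x4 cards striped (RAID0), best case

  print(f"single PCI-E x4 SSD ceiling: {x4_card} MB/s")
  print(f"two x4 cards striped:        {striped_pair} MB/s")
  print(f"one SATA6 port ceiling:      {SATA6_MB} MB/s")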

Three or more terminals in heavy use with high value work require central support. A LAN host must be connected via two actual gigabit ethernet wires (dual gigabit), whatever other networking options (802.11abgn, IEEE 1901 or IEEE 1905 options over existing wire) can also reach it, ganging these into a 2gbps connection (minus overhead) to be approximately the same speed as original 1.5G eSATA drives. This will be an adequate data connection assuming the terminals have >100GB of RAM. This host should cost about $1000 with one socket in use, $1500 with two (including the cost of a dual socket mainboard), and be configured with 24GB of RAM (maximized on build). By no means should the hardware in any LAN host box (on which many users rely, including usually employers of teleworkers) change lightly. It would ideally never change at all, except to add a second processor (when cheap) to a dual socket board. Like a router, the value of such a box is absolute stability over an 8 to 12 year lifecycle. A list of reasonable upgrades, preferably all performed at once halfway through its lifecycle, is given below.
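A back-of-envelope comparison of the ganged dual gigabit link against an original 1.5G eSATA drive; the overhead factors are rough assumptions for illustration only:

  # Ballpark: two ganged gigabit links vs. an original 1.5G eSATA drive.
  # The overhead factors are rough assumptions for illustration only.
  GIG_E = 1_000_000_000            # bits/s per gigabit link
  ETH_EFFICIENCY = 0.94            # assume ~6% lost to framing/protocol overhead

  lan_bond_bps = 2 * GIG_E * ETH_EFFICIENCY     # dual gigabit, ganged
  esata_bps = 1_500_000_000 * 0.8               # 1.5 Gb/s line rate, 8b/10b encoding

  print(f"2x GigE bond : {lan_bond_bps / 8e6:6.0f} MB/s")
  print(f"eSATA 1.5G   : {esata_bps / 8e6:6.0f} MB/s")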

terminals with 100GB-120GB

As per assumptions above, a 2012 "terminal" puts 100GB-120GB of RAM within PCI-E x4 bus speed reach of a teraflop processor, with options to stripe that & double the speed again (using two PCI-E x4 slots, perhaps sacrificing the ability to add a 2nd video card). By 2016 it may be practical to increase that to x8 or x16, doubling or quadrupling boot & application throughput. Reducing latency is more important than increasing the size of the boot image, assuming that hybrid SSD storage is the only use of the SATA6 bus (which should be reserved for this) and RAM is sufficient.

Regarding vendors of terminal mobo+CPU+GPU, as a general rule: if power is expensive, CPU power is not an issue & graphics performance must be high at low watts, go AMD. If price insensitive, i.e. if any outages are extremely costly, or if network or VM performance must be truly optimal, go Intel Xeon. If neither set of rules applies it's a horse race & at least one option from each (preferably two Intel & one AMD option per terminal type) must be reviewed. Once decided, stick with that socket & mobo vendor for 4-6 years minimum. If you can't, then you made a serious error. Anything purchased should remain useful to someone for 10 years.

Terminal mobos must have one PCI-E x16 & two more x4 or better slots, preferably two PCI-E x16 & two PCI-E x4 with at most one legacy PCI slot. "Two PCI-E x16 are not enough; "for graphics cards" grossly understates their usefulness. Intel's dual gigabit LAN cards are PCI-E x16, so if you want to talk to a NAS or other remote storage at top speed, or want a Thunderbolt or 10GigE interface (probably both essential for graphics workstations now), you must assume that one x16 slot will be for networking or perhaps faster drive controllers. PCI-E x4 SSD is now very cheap ($200 for 100GB is common, even for Sandforce 1200 controlled multi-drive addressed units that can act as a RAID0 to double boot drive throughput beyond what SATA6xRAID0 can do*). So a properly designed box that addresses the real bottlenecks (boot drive/cache/swap, LAN to NAS) must have at least two x8 or x4 slots beyond its PCI-E x16. With good enough audio on the mainboard, USB & Ethernet replacing MIDI among musicians, and audio integrated in the HDMI working, why waste two full slots to accommodate PCI audio? A legacy PCI can accommodate a second GigE LAN card to boost a one-RJ45 board to two ports but I'd have no use for it. One such slot is enough. The other one should be a x4. Hope mainboard makers get this & start to remove legacy PCI from FM2 boards. Anyone who thinks they will get true SSD performance out of on-board Marvell (or worse Realtek) controllers on SATA ports is clueless. Especially with 6 running at once. That's a fine RAID6 data array, but a very poor boot drive choice." - Craig Hubley

Terminals should avoid spinning metal storage. As of 2012, PCI-E x4 SSD with a good controller (Sandforce 1200, configurable as a striped boot) was < $2/GB in 100GB-ish configurations. Onboard SATA6 is suitable only for replicated data storage. It must be backed up at least daily (for work data, continuously) to a host NAS.

The host NAS can be a simple box reached via gigabit LAN or a true LAN host. If there are more than three terminals to serve or it's an office, consider buying a true host (see below).
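A minimal sketch of the daily push from a terminal to the host NAS; the source path, NAS name and the use of rsync are placeholders for whatever tooling is actually in place:

  # Minimal daily backup sketch: push a terminal's working data to the host NAS.
  # Source path, NAS name and the use of rsync are placeholder assumptions.
  import subprocess

  SRC = "/home/"                                # local data on the terminal (placeholder)
  DEST = "backup@nas.lan:/volumes/terminals/"   # host NAS target (placeholder)

  # -a preserves ownership/timestamps; --delete mirrors removals on the NAS copy
  subprocess.run(["rsync", "-a", "--delete", SRC, DEST], check=True)

Run it from cron or a systemd timer to get the daily (or, for work data, near-continuous) cadence.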

how to upgrade terminals

Any need for more or faster local storage requires a hybrid SSD on the terminal or a SATA6 RAID. This should be the only use made of the SATA6 interface, with extreme caution taken re use of RAID (having boxes with identical mainboards & configurations onsite is essential, as a dead RAID cannot be accessed any other way).

USB3 is strictly for transitory mobile removable drives/devices. If the terminal supports power over Ethernet, this allows it to power VoIP & other extremely useful desktop & mobile support devices like a wireless AP. This should obviate any perceived "need" for Wi-Fi transceivers in the box, or (worse) USB networking.

Any investment in these upgrades must be for durable long-lived devices with expected usefulness of 8-12 years or more (PoE being an extremely stable interface).

LAN host

For one or two users, an OpenWRT or high end (ASUS?) proprietary router with a USB 2.0 drive port, plus a gigabit-accessible media drive, may be enough to back up user data files & share canned media & installation executables/disks. Two very heavy users with high throughput needs, or three or more users whose work is very valuable or who work on tight deadlines, will find it necessary sooner rather than later to segment public vs. private networks, dual-host most boxes and add a LAN host on the internal segment, removing file service & backup duties from the router (which will continue to face the slow open Internet segment or side).

Or, alternatively, the LAN host can do both duties with a trusted well-managed firewall, relegating the router to wireless connectivity only and freeing it to be located in the optimal wireless networking spot (rather than at the nexus of wires).
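A minimal sketch of the public/private split described above, with purely illustrative subnets and addresses:

  # Sketch of the public vs. private split; subnets and addresses are illustrative only.
  import ipaddress

  PUBLIC = ipaddress.ip_network("192.168.1.0/24")   # router / open Internet side (example)
  PRIVATE = ipaddress.ip_network("10.0.0.0/24")     # internal segment with the LAN host (example)

  hosts = {
      "router":    ["192.168.1.1"],
      "lan-host":  ["192.168.1.10", "10.0.0.1"],    # dual-homed: faces both segments
      "terminal1": ["10.0.0.11"],
  }

  for name, addrs in hosts.items():
      segs = ["public" if ipaddress.ip_address(a) in PUBLIC else "private" for a in addrs]
      print(f"{name:10s} -> {', '.join(segs)}")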

BSD, typically FreeBSD, is as of 2012 still likely the best OS for such a LAN host, in either role. If Linux is the primary OS for the terminals then it should also be used, for simplicity, on a self-maintained LAN host. Similarly if Windows Server or OSX experts are at hand, those can be used as well. As end users do not typically interact with this OS, it may be a matter of taste for the LAN administrator to choose a particular OS for such host(s).

A dual gigabit controller is a must, ideally Intel or another high performing NIC interface. A second pair of gigabit controllers on a PCI-E board ($150) is optimal, permitting up to a 4gbps connection to one router/switch, or 2gbps to a workstation + 1gbps to another + 1gbps to a single-gigabit NAS. USB3 & SATA6 are also necessary interfaces on this host, plus sufficient PCI-E for a Sandforce PCI-E boot & cache drive (as per the terminals above).

main

If it is possible to wait until 2013, significant changes to LAN host mobos are likely. ASUS in particular is testing the market for variants with no-slot twin GPUs, an X79 NB chipset, NO PCI slots, two Thunderbolt ports, eight DIMM slots allowing for up to 128GB of DDR3, 10 SATA 6G ports with hardware RAID & 12 USB 3.0 ports. This is a viable LAN host box only if Thunderbolt becomes a very robust networking & SSD interface & DDR3 RAM gets cheaper.

Intel LGA1155 & AMD FM1 were the most viable LAN host sockets as of 2012-06. Whatever socket the terminals use must be at least strongly considered for the LAN host too.

Price may favour compromises, especially as limited graphics are fine & low watts are critical for a LAN host. Relying on no slots whatsoever may be an optimal solution.

As of 2012 another viable stable Intel socket available for hosts was LGA 1366. A dual socket offers the most redundant support practical for LANs, and Supermicro is among the most stable board vendors. Dual onboard gigabit using Intel NIC chips saves one PCIe x8 or x16 slot & the $150+ which would otherwise have to go to an Intel dual gigabit NIC [5], or a 10 gigabit NIC or dual 10 gigabit NIC [6] (still >$700 [7]), to talk to the NAS.

  • Supermicro X8DTL-3-O Intel 5500 Dual LGA1366 ATX 12" x 10", Dual Intel® 82574L Gigabit

[8]

storage: keep the spinning metal centralized

Spinning metal should be concentrated in the LAN host & drives moved to less critical or more redundant (RAID) roles as they age. A hybrid SSD & a small PCI-E x4 SSD (equal to the terminal's) are practical as starting storage. Two of the terminal SSD class are more practical than a larger SSD in the LAN host because of the value of keeping spare parts for the terminals. Also, as of 2012-06 extreme storage (2TB PCI-E x8 for US$3500+) [9] was impractical, but as prices fall these may be viable upgrades. It is not unreasonable to anticipate 1TB of fast (Kingston, Intel, Sandforce) PCI-E x8 storage in a LAN host for under $800 as a viable upgrade by 2016, perhaps 500GB of x4 as a viable upgrade for $300 as early as 2014, & 250GB of x4 for $300 in 2013 (about 50% more for 120% more storage than the best PCI-E x4 pricing of 2012-06).
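The price-per-GB arithmetic behind these projections, taking the 2012-06 baseline and the projected price points above as assumptions:

  # Price-per-GB arithmetic behind the projections above (baseline and projections
  # are the figures assumed in this section, not quotes).
  def dollars_per_gb(price, gb):
      return price / gb

  baseline_2012 = dollars_per_gb(200, 100)    # ~$2/GB PCI-E x4 as of 2012-06
  proj_2013 = dollars_per_gb(300, 250)        # projected 250GB x4 card for $300
  proj_2016 = dollars_per_gb(800, 1000)       # projected 1TB x8 card for $800

  print(f"2012-06 baseline : ${baseline_2012:.2f}/GB")
  print(f"2013 projection  : ${proj_2013:.2f}/GB")
  print(f"2016 projection  : ${proj_2016:.2f}/GB")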

what to upgrade, preferably all at once

Like a router, the value of a LAN host is absolute stability over an 8 to 12 year lifecycle. Only the following safe upgrade paths should usually be considered:

  • Add a second processor (when cheap) to a dual socket board - be careful, modern sockets are rated for only a few insertion cycles...
  • Add a second PCI-E SSD (when cheap), or two, or swap to liberate one to equip another terminal with the identical model as was used in the earlier terminals (i.e. as a spare)
  • (at end of lifecycle, or on major reconfiguration or failure)
    • add a PCI-E x16 graphics card & repurpose as a workstation (if performance per watt allows) or as a display support system for a less-used room (boardroom) if flops/watt is poorer
    • remove excess storage & repurpose as router only
    • remove excess networking & repurpose as NAS only