Tuesday, June 28, 2011

Facebook develops its own storage


 “The first project of its kind at Facebook, we’re designing a next generation storage server, and we’re seeking a Storage Hardware Design Engineer to architect, specify and test critical system components.”

Well, storage is changing together with the rest of IT, and the Open Compute Project is not the smallest part of these changes.

Thursday, June 23, 2011

Vancouver VMUG User Conference

On June 21st, 2011 I attended the Vancouver VMware User Group (VMUG) conference, and here are some brief notes about things that attracted my attention. Of course, I didn't attend all the sessions, but some of the presented technologies and products were new to me and may be interesting for future use.

The VIO-4000 IOV Appliances from Virtensys remove the physical I/O adapters (Ethernet NICs, FC HBAs and SAS/SATA RAID adapters) and local disk drives from the servers within a rack, leaving them as stateless compute and memory nodes. The I/O adapters are pooled and consolidated within the VIO-4000 Appliance, which creates hardware-virtualized images of the I/O adapters (vNICs, vHBAs and vDisks) in each server. The vNICs, vHBAs and vDisks emulate the physical adapters and local disk drives, so they require no changes to the servers, OS or applications, and multiple servers use and share the physical adapters concurrently. The VIO-4000 Series IOV Appliances connect to servers using a single x4 PCI Express® (PCIe) Gen2 cable per server that attaches directly to the server's native PCIe bus; this single PCIe cable replaces multiple Ethernet and FC cables per server. Finally, the VIO-4000 Appliance provides Ethernet and FC uplinks that connect directly to the corporate infrastructure without the additional layer of Top-of-Rack Ethernet and FC access switching. Using the Remote Management Console it's possible to create virtual adapters, assign them to the servers, dynamically allocate available bandwidth to virtual adapters, create LUNs from disks and assign them to connected servers. It seems that we may soon see something resembling the brave old mainframe.


The Complete Cluster from Nutanix is a converged, scale-out compute and storage system built around a distributed file system modeled on the Google File System. The building block for the system is a Nutanix Complete Appliance: a 2U chassis with four servers, each running a standard hypervisor and complete with CPUs, memory and local storage (SSDs and HDDs). True scale-out is enabled by a global data system architecture that integrates each new Nutanix Complete Appliance into a single unified storage and compute cluster. Instead of a tiered approach with separate compute, communication and storage layers, we get a set of small building blocks with the ability to redistribute virtual resources between them.

Xsigo is another player in the market of multiprotocol directors and converged infrastructure. The Xsigo I/O Director is a hardware and software solution that gives you a simpler, converged infrastructure: it consolidates Ethernet and Fibre Channel connections onto a single cable. So instead of deploying multiple I/O cards and cables to every server, Xsigo lets you create connections dynamically in software, up to 64 isolated connections per device.

EtherDrive arrays from Coraid with ATA over Ethernet (AoE) connectivity may be interesting when a relatively cheap solution is needed, but 1 Gbps iSCSI is not enough.

It sounds strange, but FalconStor NSS, which was one of the first in-band storage virtualization appliances, still doesn't have VAAI support.

Additional thanks to the EMC guys, who provided very useful labs. Sometimes one small hands-on lab can show much more than a big presentation.

And here's a picture from the yard of the River Rock Casino Resort, where the conference was held.

Thursday, June 16, 2011

Hitachi says goodbye to monolithic arrays

The storage platform roadmap from Hitachi


It seems that nobody is left to develop and produce high-end monolithic arrays.

Thursday, June 9, 2011

Simple timestamps generation script

timestamp.bat:

@echo off
:loop
date /T >> timestamp.log
time /T >> timestamp.log
rem Replace # with the number of seconds to wait between timestamps
choice /D Y /T # > nul
goto loop

It may be useful to check that the RPO value promised by a replication solution is actually met.
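Once timestamp.log has accumulated some entries, the largest gap between consecutive timestamps can be compared against the promised RPO. A minimal Python sketch of such a check, assuming a US-locale output format for date /T and time /T (the real output is locale-dependent) and using a hypothetical sample of the log contents:

```python
from datetime import datetime

# Hypothetical sample of timestamp.log contents; date /T and time /T
# output varies by locale, so the parsing format below is an assumption.
sample_log = """\
Thu 06/09/2011
10:15 PM
Thu 06/09/2011
10:16 PM
Thu 06/09/2011
10:19 PM
"""

lines = sample_log.splitlines()

# Pair each date line with the time line that follows it.
stamps = [
    datetime.strptime(f"{d.split()[-1]} {t}", "%m/%d/%Y %I:%M %p")
    for d, t in zip(lines[0::2], lines[1::2])
]

# The largest gap between consecutive entries is the worst-case
# interval the script actually achieved.
gaps = [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]
print(max(gaps))  # -> 180.0
```

If the largest gap exceeds the RPO declared for the replication link, the timestamps give a simple piece of evidence that the target was missed.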