Tuesday, December 27, 2011

Top 5 cloud storage trends of 2011

The top 5 cloud storage trends of 2011 reflect a young market searching for acceptance while the number of options and features continued to expand. Like many emerging technologies, hype still outpaces reality, but reality is hustling to catch up.

Cloud storage trend 1. Cloud washing is still an issue. Confusion remains about what makes up a storage cloud, mainly because of cloud washing by vendors trying to position legacy products and services as "cloud." As a result, many data storage administrators spent the year trying to understand the differences among public, private and hybrid clouds, and why any of them is different from the storage-area network (SAN) or network-attached storage (NAS) they’ve been running for years.

Confusion caused by cloud washing was evident at a September 2011 Storage Decisions event in New York City, where several analysts tried to dissect private storage clouds. “What is [private] cloud storage?” asked Howard Marks, chief scientist at DeepStorage.Net, during his presentation on building your own private cloud or hybrid storage cloud. “Now, it’s anything the guy who has a product wants it to be.”


Cloud storage trend 2. Struggle to define a storage cloud continues. As a result of cloud washing, vendors and analysts spent the year trying to narrow down the key functions and features that make up a storage cloud.

The industry agrees on a few things. A true cloud has to have a highly scalable, elastic and virtualized infrastructure. Object storage is the main storage technology because it can scale elastically to dozens of petabytes, or even exabytes, of data. In addition, analysts say clouds have to be geographically aware so that objects and files aren't location dependent. Cloud storage is accessed directly across the internet via APIs such as REST or SOAP, which efforts like the Cloud Data Management Interface (CDMI) aim to standardize. Multi-tenancy, along with the security and chargeback it enables, is also considered a key cloud function.
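
To make the API point concrete, here is a minimal sketch of storing and retrieving an object over HTTP using Python and the third-party requests library. The endpoint, container and credentials are hypothetical placeholders; real object stores (Amazon S3, CDMI-compliant services and so on) each define their own URLs, headers and authentication schemes.

```python
import requests  # third-party HTTP client

# Hypothetical REST object-store endpoint and credentials, for illustration only.
BASE_URL = "https://storage.example.com/v1/my-container"
AUTH = ("account-id", "api-key")

# Store an object with an HTTP PUT...
with open("backup-2011-12.tar.gz", "rb") as f:
    resp = requests.put(f"{BASE_URL}/backups/backup-2011-12.tar.gz",
                        data=f, auth=AUTH, timeout=60)
resp.raise_for_status()

# ...and retrieve it later with a GET from any location with Internet access.
resp = requests.get(f"{BASE_URL}/backups/backup-2011-12.tar.gz",
                    auth=AUTH, timeout=60)
resp.raise_for_status()
print(len(resp.content), "bytes retrieved")
```

Because the object is addressed by a URL rather than a block device or file share, the same call works from any site or device, which is what makes cloud storage location independent.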


Cloud storage trend 3. Cloud storage gateway vendors extend capabilities. In 2010, a group of startups launched cloud storage gateway appliances to help organizations move primary storage onto the cloud. These products were refined and expanded in 2011 as vendors tried to make them more competitive with traditional storage.

This year, TwinStrata Inc. extended the capabilities of its CloudArray gateway device by adding support for on-premises SAN and NAS, direct-attached storage (DAS) services and private clouds. StorSimple Inc. bumped up its iSCSI appliances, and added fully redundant components and the ability to upgrade without disruption. Nasuni Corp. added multisite capabilities for its NAS filer appliances, allowing multiple controllers to have live access to the same volume of snapshots through cloud service providers. Nasuni also added a service-level agreement capability that guarantees close to 100% uptime for its Nasuni Filer NAS cloud gateway. “We’re a storage services company now,” Nasuni CEO Andres Rodriguez said.


Cloud storage trend 4. Cloud storage is still mainly for backup. So far, at least from a storage perspective, the cloud has been most useful for simplifying the backup process. Cloud storage backup can be an effective alternative to tape for protecting data in remote sites and branch offices.
For example, the Los Angeles Unified School District’s (LAUSD) facilities division placed TwinStrata CloudArray Virtual Appliances in 15 remote locations across the district to back up 80 TB of primary storage to the Amazon Simple Storage Service (S3) cloud. LAUSD estimates it will save $283,000 over five years, mostly from eliminating tape and moving to low-cost commodity servers.
Psomas, a Los Angeles-based engineering consulting firm, turned to Riverbed Whitewater backup gateways, which provide local-area network (LAN)-type access to public cloud storage for data protection. Psomas replaced tape at 11 sites with one Whitewater cloud storage gateway and 10 virtual appliances. The company estimates it reduced backup costs by approximately 40% since it started rolling out cloud devices in March.


Cloud storage trend 5. Cloud archiving is another use of the cloud. We’ve seen petabyte-size digital archives move to the cloud in recent months as cloud archiving has emerged as another common use case. The University of Southern California (USC) contracted with Nirvanix Inc. to establish one of the world’s largest private storage clouds -- 8.5 petabytes (PB) of digital archive spread over two sites.
The Ecole Polytechnique Federale de Lausanne (EPFL) is using an Amplidata AmpliStor AS20 object storage system to build a 1 PB active archive of more than 5,000 classic jazz concert performances from the Montreux Jazz Festival.

By: Sonia R. Lelii, Senior News Writer

Thursday, December 22, 2011

Faster, More Reliable Storage For Backup & Data Recovery

Through its backup and disaster recovery offerings, solution provider Servosity provides a very important service for managed service providers and IT resellers. The company’s backup services are deployed using cloud infrastructure, which requires servers and storage that can handle the mission-critical data collection and protection needs of multiple clients. For Servosity, reliable storage is a No. 1 concern, and the company was in need of a solution that could solve a number of difficult problems.
“We had some very tough challenges,” says Damien Stevens, founder and CEO of Servosity (www.servosity.com). “We needed 24/7/365 operations with no maintenance windows or unplanned downtime [and] unlimited growth capabilities, and we needed the speed of SAS drives and the sophistication of an enterprise SAN—and we needed all that for far less than the cost of an enterprise SAN or even SAS drives. We needed SAS performance and reliability on a SATA budget.”
Choosing The Right Solution

Servosity Boosts Its Backup Solutions With Aberdeen’s AberSAN ZXP

Servosity knew it wanted to take advantage of Nexenta’s NexentaStor NAS/SAN software platform, but it also knew it would need the right hardware to run it. That’s why Servosity opted for two of Aberdeen’s AberSAN ZXP High Availability ZFS SAN units and one 45-bay JBOD to handle the NexentaStor ZFS file system with 144TB of licensing. Servosity chose Aberdeen’s AberSAN solution for its versatility as well as its ability to work well with the NexentaStor platform.
“The ZFS file system is stable, scalable, and fast,” Stevens says. “On top of that, Nexenta’s NexentaStor commercial offering adds additional features as well as enterprise support, and Aberdeen is one of the few vendors that can custom-build a hardware solution designed specifically for NexentaStor. It’s the perfect marriage of hardware and software.”
Aberdeen’s AberSAN ZXP product is designed to be highly scalable and has a variety of other customizable options. For instance, the base solution is a 2U head unit, but there are options for 3U and 4U JBOD expansions, as well. There are also multiple power supply, OS, and storage options so companies can build the right model for their needs. It’s this customization that allowed Servosity to deploy these products and know they would perform as expected.
Implementation & Results
Stevens admits that Servosity had some unique needs from the start and that it would be difficult to match them, so he knew there would be challenges along the way. “With some of these technologies, we’ve been a very early adopter,” Stevens says. Because of this, Servosity went through an extensive testing phase to find the right balance of hardware and software to get the performance it needed. He says Servosity wouldn’t have been able to create the right solution without the constant support of Aberdeen, which sent evaluation units to Servosity and provided advice and expertise that helped the company build a system tailored to its needs.
“With a sophisticated system spanning multiple data centers and huge amounts of storage, there’s much more to it than buying a product,” Stevens says. “The difference for us has been Aberdeen and Nexenta working together with our engineers to custom-build and then custom-tune a system that meets our needs and has exceeded our expectations.”
Once the entire system was installed, Servosity started seeing the benefits. Stevens says that traditional systems, unlike the one the company installed, are often at risk for “silent data corruption,” which happens naturally over time with a constant flow of data. To prevent this, ZFS systems self-heal when a disk presents corrupt data, which protects against hardware failure and prevents the hardware from corrupting the data, according to Stevens. In addition to preventing data corruption, Servosity is pleased that the system can use solid-state drives as a cache for SATA and SAS drives, making the system faster at a lower cost.
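
As a rough conceptual sketch of the checksum-and-repair idea Stevens describes (not of ZFS internals, which are far more involved), the following Python toy records a checksum when a block is written, verifies it on every read, and falls back to a mirror copy when the primary copy has silently changed.

```python
import hashlib

def checksum(block: bytes) -> str:
    """Fingerprint a data block; ZFS uses block checksums in a similar spirit."""
    return hashlib.sha256(block).hexdigest()

# Toy "mirror": the same block written to two disks, each with a checksum
# recorded at write time.
block = b"customer backup data"
disk_a = {"data": block, "sum": checksum(block)}
disk_b = {"data": block, "sum": checksum(block)}

# Simulate silent corruption on one copy.
disk_a["data"] = b"customer backup dat?"

def read_with_self_heal(primary: dict, mirror: dict) -> bytes:
    """Verify the checksum on read; if the primary copy no longer matches,
    return the good mirror copy and repair the bad one ("self-heal")."""
    if checksum(primary["data"]) == primary["sum"]:
        return primary["data"]
    good = mirror["data"]
    primary["data"] = good
    primary["sum"] = checksum(good)
    return good

assert read_with_self_heal(disk_a, disk_b) == block  # corruption never reaches the reader
```
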
Using Aberdeen’s Products Now & In The Future
Overall, Servosity is extremely happy with its decision to invest in Aberdeen’s AberSAN solution and plans on using Aberdeen’s products in the future. In fact, Stevens says that Servosity’s projections “show a need for 400% of our current infrastructure in the next 12 months.” To help meet the demand, Servosity will continue to use Aberdeen for its highly customizable and scalable Nexenta products in order to help the company continue to grow.
The AberSAN is a highly customizable and versatile storage solution that incorporates Intel® Xeon® 5600/5500 Series processors and is designed to be both scalable and safe. Aberdeen offers a variety of configuration options and will work with companies to fine-tune the solution to meet their specific needs. The AberSAN uses a ZFS file system to avoid traditional data corruption issues and prevent stored information from being irreparably damaged or lost.

Wednesday, December 21, 2011

What is Extranet?

An extranet is a private network that uses Internet technology and the public telecommunication system to securely share part of a business's information or operations with suppliers, vendors, partners, customers, or other businesses. An extranet can be viewed as part of a company's intranet that is extended to users outside the company. It has also been described as a "state of mind" in which the Internet is perceived as a way to do business with other companies as well as to sell products to customers.


An extranet requires security and privacy. These requirements can be met through firewall server management, the issuance and use of digital certificates or similar means of user authentication, encryption of messages, and the use of virtual private networks (VPNs) that tunnel through the public network.

Companies can use an extranet to:
  • Exchange large volumes of data using Electronic Data Interchange (EDI)
  • Share product catalogs exclusively with wholesalers or those "in the trade"
  • Collaborate with other companies on joint development efforts
  • Jointly develop and use training programs with other companies
  • Provide or access services provided by one company to a group of other companies, such as an online banking application managed by one company on behalf of affiliated banks
  • Share news of common interest exclusively with partner companies

Tuesday, December 20, 2011

What is 3G anyway?

3G (third generation of mobile telephony)

3G refers to the third generation of mobile telephony (that is, cellular) technology. The third generation, as the name suggests, follows two earlier generations.

The first generation (1G) began in the early 1980s with commercial deployment of Advanced Mobile Phone Service (AMPS) cellular networks. Early AMPS networks used Frequency Division Multiple Access (FDMA) to carry analog voice over channels in the 800 MHz frequency band.

The second generation (2G) emerged in the 1990s when mobile operators deployed two competing digital voice standards. In North America, some operators adopted IS-95, which used Code Division Multiple Access (CDMA) to multiplex up to 64 calls per channel in the 800 MHz band. Across the world, many operators adopted the Global System for Mobile Communications (GSM) standard, which used Time Division Multiple Access (TDMA) to multiplex up to 8 calls per channel in the 900 and 1800 MHz bands.

The International Telecommunication Union (ITU) defined IMT-2000, the third generation (3G) of mobile telephony standards, to facilitate growth, increase bandwidth, and support more diverse applications. For example, GSM could deliver not only voice, but also circuit-switched data at speeds up to 14.4 Kbps. But to support mobile multimedia applications, 3G had to deliver packet-switched data with better spectral efficiency, at far greater speeds.

However, to get from 2G to 3G, mobile operators had to make "evolutionary" upgrades to existing networks while simultaneously planning their "revolutionary" new mobile broadband networks. This led to the establishment of two distinct 3G families: 3GPP and 3GPP2.

The 3rd Generation Partnership Project (3GPP) was formed in 1998 to foster deployment of 3G networks that descended from GSM. 3GPP technologies evolved as follows.

• General Packet Radio Service (GPRS) offered speeds up to 114 Kbps.

• Enhanced Data Rates for Global Evolution (EDGE) reached up to 384 Kbps.

• UMTS Wideband CDMA (WCDMA) offered downlink speeds up to 1.92 Mbps.

• High Speed Downlink Packet Access (HSDPA) boosted the downlink to 14 Mbps.

• LTE Evolved UMTS Terrestrial Radio Access (E-UTRA) is aiming for 100 Mbps.
GPRS deployments began in 2000, followed by EDGE in 2003. While these technologies are defined by IMT-2000, they are sometimes called "2.5G" because they did not offer multi-megabit data rates. EDGE has now been superseded by HSDPA (and its uplink partner HSUPA). According to the 3GPP, there were 166 HSDPA networks in 75 countries at the end of 2007. The next step for GSM operators: LTE E-UTRA, based on specifications completed in late 2008.

A second organization, the 3rd Generation Partnership Project 2 (3GPP2), was formed to help North American and Asian operators using CDMA2000 transition to 3G. 3GPP2 technologies evolved as follows.
• One Times Radio Transmission Technology (1xRTT) offered speeds up to 144 Kbps.

• Evolution Data Optimized (EV-DO) increased downlink speeds up to 2.4 Mbps.

• EV-DO Rev. A boosted downlink peak speed to 3.1 Mbps and reduced latency.

• EV-DO Rev. B can use 2 to 15 channels, with each downlink peaking at 4.9 Mbps.

• Ultra Mobile Broadband (UMB) was slated to reach 288 Mbps on the downlink.
1xRTT became available in 2002, followed by commercial EV-DO Rev. 0 in 2004. Here again, 1xRTT is referred to as "2.5G" because it served as a transitional step to EV-DO. EV-DO standards were extended twice – Revision A services emerged in 2006 and are now being succeeded by products that use Revision B to increase data rates by transmitting over multiple channels. The 3GPP2's next-generation technology, UMB, may not catch on, as many CDMA operators are now planning to evolve to LTE instead.

In fact, LTE and UMB are often called 4G (fourth generation) technologies because they increase downlink speeds by an order of magnitude. This label is a bit premature because what constitutes "4G" has not yet been standardized. The ITU is currently considering candidate technologies for inclusion in the 4G IMT-Advanced standard, including LTE, UMB, and WiMAX II. Goals for 4G include data rates of at least 100 Mbps, use of OFDMA transmission, and packet-switched delivery of IP-based voice, data, and streaming multimedia.

Friday, December 16, 2011

Seagate matches and raises WD disk warranty cuts

'We're just being consistent with the industry'
Seagate is cutting most Barracuda and Momentus warranty periods down to one year, with others moving from five-year warranties to three.

Following on from Western Digital cutting some of its warranty periods to two years, we learn that Seagate is going further. In a letter to its authorised distributors, dated 6 December 2011, the company writes:
 
Effective December 31, 2011, Seagate will be changing its warranty policy from a 5 year to a 3 year warranty period for Nearline drives, 5 years to 1 year for certain Desktop and Notebook Bare Drives, 5 years to 3 years on Barracuda XT and Momentus XT, and from as much as 5 years to 2 years on Consumer Electronics.
The details of the new warranty periods are:
  • Constellation 2 and ES.2 drives: 3 years
  • Barracuda and Barracuda Green 3.5-inch drives: 1 year
  • Barracuda XT: 3 years
  • Momentus 2.5-inch (5400 and 7200rpm): 1 year
  • Momentus XT: 3 years
  • SV35 Series - Video Surveillance: 2 years
  • Pipeline HD Mini, Pipeline HD: 2 years
Mission-critical and retail products are not affected by this change. The new warranty periods will apply to shipments from 31 December.

Seagate says it is standardising warranty terms "to be more consistent with those commonly applied throughout the consumer electronics and technology industries. By aligning to current industry standards Seagate can continue to focus its investments on technology innovation and unique product features that drive value for our customers rather than holding long-term reserves for warranty returns".

Possible translation: Seagate needs to switch some warranty funding into product development. ®

By Chris Mellor

Monday, December 12, 2011

Using table salt to increase hard drive data density

A team has discovered that, by using table salt, they can make hard drives even denser, possibly producing 6TB per hard drive platter.
Dr Joel Yang at the Institute of Materials Research and Engineering (IMRE) has discovered a way to increase the data density of a drive to 3.3 Terabit/in², meaning that it will be possible to manufacture hard drive platters offering 6 TB of storage. Surprisingly, the secret ingredient in producing these high-capacity drives is sodium chloride, or rather, your common table salt.

Electron microscopy images of 1.9 and 3.3 Terabit/in² densities
"Conventional hard disks have randomly distributed nanoscopic magnetic grains - with a few tens of grains used to form one bit – that enable the latest hard disk models to hold up to 0.5 Terabit/in2 of information,"


IMRE explains in a press release. "The IMRE-led team used the bit-patterned media approach, where magnetic islands are patterned in a regular fashion, with each single island able to store one bit of information."
Manufacturers currently use tiny grains of around 7 to 8 nm in size deposited on the surface of storage media. A single bit of data is stored in a cluster of these grains and not in any single grain. However, Dr. Yang managed to store the same amount of information on a single 10-nm grain. Thus, replacing several 7-nm grains with one 10-nm grain saves space and allows for denser storage capacities.
In addition to the higher capacity, the IMRE also reveals that the new method can be added to existing lithography processes thanks to a secret ingredient: table salt.
"The secret of the research lies in the use of an extremely high-resolution e-beam lithography process that produces super fine nano-sized structures," IMRE reports. "Dr Yang discovered that by adding sodium chloride to a developer solution used in existing lithography processes, he was able to produce highly defined nanostructures down to 4.5-nm half pitch, without the need for expensive equipment upgrades."
Dr. Yang said that the salt-based method has achieved data-storage capability at 1.9 Terabit/in², though bits at densities of up to 3.3 Terabit/in² were also fabricated. Further research and development is aiming to achieve 10 Terabit/in² in the future, but don't expect drives using the salt-based process to appear for another two years, if not more.

Ref: Tom's Hardware

Transitioning from tape to a disk backup appliance

When an organization discovers just how well disk-based backups overcome the challenges that have long been a part of tape backups, it is tempting to say “out with the old and in with the new.”

However, making the switch from tape backups to a disk-based backup appliance requires a lot more planning than might be expected. You have to account for a number of different factors, including your ability to restore data that was originally backed up to tape.

Lab test your new backup solution
When you purchase a disk backup appliance, the first thing that I recommend is to test it in a lab environment. It may be tempting to connect the new backup solution to your production network and immediately begin using it for backups, but doing so can cause problems.

The reliability of using a new backup appliance to back up a production network should be a main concern. If the backup does not work in the way that you expected (which can easily happen due to a configuration error) then you might end up in a situation in which no usable current backups exist.

That’s why I recommend setting up a lab environment that shares a similar configuration to your production network. This process allows you to set up, configure and thoroughly test your new backup solution prior to placing it on the production network.

It is important to test the new backup system, rather than trying to connect the new system and the existing backup system simultaneously. The reason is that when you run two different backup solutions against the same servers, the backup products will usually fight with one another. In the case of file servers, both solutions may attempt to manipulate the archive bit on your files. This bit is often used to determine what needs to be backed up. In the case of Exchange servers, one backup solution may process and then purge transaction log files before the other backup solution is able to make a backup.
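
To see why the archive bit matters, here is a minimal, Windows-only Python sketch that reads the attribute on a file; the path is hypothetical. A backup product that clears this bit after each run changes what a second product, relying on the same bit, believes still needs to be backed up.

```python
import ctypes  # Windows-only: uses the Win32 GetFileAttributesW call

FILE_ATTRIBUTE_ARCHIVE = 0x20  # set by Windows whenever a file is modified

kernel32 = ctypes.windll.kernel32
kernel32.GetFileAttributesW.restype = ctypes.c_uint32
kernel32.GetFileAttributesW.argtypes = [ctypes.c_wchar_p]

def archive_bit_set(path: str) -> bool:
    """Return True if the archive attribute is set on the given file."""
    attrs = kernel32.GetFileAttributesW(path)
    if attrs == 0xFFFFFFFF:  # INVALID_FILE_ATTRIBUTES
        raise OSError(f"Cannot read attributes for {path}")
    return bool(attrs & FILE_ATTRIBUTE_ARCHIVE)

# Hypothetical file path used purely for illustration.
print(archive_bit_set(r"C:\data\report.docx"))
```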

As you perform your tests, keep in mind that there will likely be a period of coexistence when both your old and new backup systems exist on your network (although you will have to suspend your old backup jobs to avoid the problems mentioned earlier). You must take this situation into account as you test your new backup system, and make sure that the agents do not interfere with one another.

Determine your backup needs
The next step in the process is to determine your backup needs. You must review your existing backup logs to determine what is currently being backed up, and whether any changes need to be made. In doing so, it is easy to assume that you can ignore the current backup schedule since disk-based backups tend to perform backups in near real time. However, the backup schedule may reveal the unexpected. For instance, many companies are required to create and retain quarterly archives. Such archives must be kept separate from the regular backups.

Decide what to do with your tape hardware
The third step in the process is deciding what to do with your existing tape hardware. Getting rid of your old tape drives is not an option, because you have to be able to restore backups that were created with them.

One option is to connect your tape drive to your disk backup solution. That way, you can periodically dump your disk backups to tape for long-term data retention. Of course, if you are ever asked to restore any of your old tape backups, you will likely have to reconnect the tape drive to a computer that is running your old backup software.

This brings up another point: Be sure to keep an up-to-date copy of your old backup software in a safe place. In order to restore an archive tape, you will need to have a copy of the software that was used to create the archive.
 
Evaluate your long-term tape retention requirements
The next step in the process is to evaluate your long-term tape retention requirements. You probably have a mountain of tapes stored at an offsite facility. Eventually, you will probably be able to get rid of (or overwrite) some of these tapes but you will have to determine how long the tapes must be retained in order to meet your recovery goals.

Make the transition
Once you have thoroughly tested your new disk backup appliance and determined the impact of the transition on your network, it is time to move the new backup system from the lab to the production network. After doing so, don’t forget about your old backup software. Some administrators like to leave the old backup software installed in case they have to restore a file or even revert back to the old solution because of an unforeseen problem. While there is nothing wrong with leaving the old backup software installed, you do need to cancel or suspend the backup jobs so that they do not interfere with your new backup solution.

Make your first backup
Once your new backup solution is in place, you will have to run a full backup. Most of your future backups will be incremental, but your first backup will have to be a full backup, which can take additional time to complete.
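
As a simple illustration of the difference, the sketch below copies only files modified since a previous run, which is all an incremental pass conceptually does; the first run, with no prior timestamp, would have to copy everything. The paths and timestamp are hypothetical, and a real backup product also handles deletions, open files, permissions and application consistency.

```python
import os
import shutil
import time

SOURCE = "/data"                    # hypothetical source directory
DEST = "/backup/incremental"        # hypothetical backup destination
LAST_RUN = time.time() - 24 * 3600  # timestamp of the previous backup run

# Walk the source tree and copy only files changed since the last run.
for root, _dirs, files in os.walk(SOURCE):
    for name in files:
        src = os.path.join(root, name)
        if os.path.getmtime(src) > LAST_RUN:
            rel = os.path.relpath(src, SOURCE)
            dst = os.path.join(DEST, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)  # copies file data and metadata
```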

After your first backup is done, you must review the backup logs for any signs of trouble. You should also thoroughly test your ability to restore the data that has been backed up.

Monitor your backups
You should monitor your new backup solution’s disk space consumption rate and watch the logs for any sort of errors that may occur over the next several months.

 By: Brien M. Posey

Friday, December 9, 2011

iSCSI (Internet Small Computer System Interface)

iSCSI is Internet SCSI (Small Computer System Interface), an Internet Protocol (IP)-based storage networking standard for linking data storage facilities, developed by the Internet Engineering Task Force (IETF). By carrying SCSI commands over IP networks, iSCSI is used to facilitate data transfers over intranets and to manage storage over long distances. The iSCSI protocol is among the key technologies expected to help bring about rapid development of the storage area network (SAN) market, by increasing the capabilities and performance of storage data transmission. Because of the ubiquity of IP networks, iSCSI can be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet and can enable location-independent data storage and retrieval.

How iSCSI works:
When an end user or application sends a request, the operating system generates the appropriate SCSI commands and data request, which then go through encapsulation and, if necessary, encryption procedures. A packet header is added before the resulting IP packets are transmitted over an Ethernet connection. When a packet is received, it is decrypted (if it was encrypted before transmission) and disassembled, separating the SCSI commands and request. The SCSI commands are sent on to the SCSI controller, and from there to the SCSI storage device. Because iSCSI is bi-directional, the protocol can also be used to return data in response to the original request.
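
As a purely conceptual Python sketch of that encapsulation step (the real iSCSI PDU format defined in RFC 3720 is far more elaborate), a command payload can be pictured as being wrapped in a header on the way out and unwrapped on the way in:

```python
import struct

ISCSI_PORT = 3260  # default TCP port used by iSCSI

def encapsulate(scsi_command: bytes) -> bytes:
    """Toy illustration: prepend a simple length header to a SCSI-style
    command payload before it is handed to TCP/IP for transmission."""
    return struct.pack("!I", len(scsi_command)) + scsi_command

def decapsulate(packet: bytes) -> bytes:
    """Receiving side: strip the header and recover the command so it can
    be passed on to the SCSI controller."""
    (length,) = struct.unpack("!I", packet[:4])
    return packet[4:4 + length]

# A made-up command payload round-trips intact.
cmd = b"\x28\x00\x00\x00\x10\x00"  # illustrative bytes, not a real CDB
assert decapsulate(encapsulate(cmd)) == cmd
```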

iSCSI is one of two main approaches to storage data transmission over IP networks; the other method, Fibre Channel over IP (FCIP), translates Fibre Channel control codes and data into IP packets for transmission between geographically distant Fibre Channel SANs. FCIP (also known as Fibre Channel tunneling or storage tunneling) can only be used in conjunction with Fibre Channel technology; in comparison, iSCSI can run over existing Ethernet networks. A number of vendors, including Cisco, IBM, and Nishan have introduced iSCSI-based products (such as switches and routers).

Five layers of iSCSI storage connection security

iSCSI as a storage protocol is noted for its administrative simplicity. If an admin knows TCP/IP – and what self-respecting admin doesn’t – then they possess most of the knowledge they need for success in managing iSCSI connections.
iSCSI also benefits from its choices in hardware. In most cases, the very same Ethernet cables and networking equipment lying around will work swimmingly for passing iSCSI traffic.

With all this going for it, the iSCSI protocol for connecting servers to storage systems seems like a shoo-in for primacy across IT environments. But it does have a dark side, one that too often gets neglected in the rush to spin up new servers.

That dark side is iSCSI’s options for securing those connections. Unlike direct-attached storage, where connections never leave a server’s chassis, and unlike Fibre Channel’s entirely separate, single-use connectivity infrastructure, iSCSI’s use-it-on-the-network-you-already-have value belies a range of security problems that aren’t obvious to resolve.
But solutions do exist to secure iSCSI network connections. What’s particularly interesting is, these solutions can be layered atop each other to create whatever depth of security data needs. Admins should consider this carefully as they ponder the iSCSI exposure in their environment.

Layer No. 1: Network segregation and ACLs. iSCSI’s first layer of protection against prying eyes does not happen within the iSCSI protocol itself. Instead, the architecture created in support of iSCSI should be constructed so that iSCSI traffic is segregated from other, traditional networking.

Necessary for security as well as for performance, that segregation can be physical, by establishing paths through separate networking equipment. It can also be logical, through the use of VLANs and access control lists (ACLs) at the network layer. Configuring Layer 3 (IP-based) ACLs on network equipment ensures traffic is routed appropriately. It also has the effect of masking iSCSI LUNs that shouldn’t be globally accessible. iSCSI operates by default over TCP port 3260, which means Layer 4 (port-based) ACLs can also provide added security.
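
A quick way to sanity-check that kind of segregation is to test whether the target’s iSCSI port even answers from a given host. The sketch below is a minimal Python check against a hypothetical storage address; with the ACLs in place, it should succeed only from hosts on the storage network.

```python
import socket

ISCSI_PORT = 3260  # default iSCSI TCP port mentioned above

def port_reachable(host: str, port: int = ISCSI_PORT, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical storage-target address used for illustration.
print(port_reachable("192.168.50.10"))
```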

Layer No. 2: CHAP authentication. ACLs may ensure iSCSI traffic correctly navigates to the right hosts, but they do nothing to authenticate servers to storage. That process is handled through the protocol’s Challenge-Handshake Authentication Protocol (CHAP) support, using one of two possible configurations. In the first, one-way CHAP authentication, the iSCSI target on the storage authenticates the initiator at the server. Secrets, essentially iSCSI passwords, in one-way CHAP authentication are set on the storage. Initiators on any incoming servers must know that secret to initiate a session.

A second and slightly better option is mutual CHAP authentication, where the iSCSI target and initiator authenticate each other. Separate secrets, one for each half of the connection, are used in this configuration. Each half must know the secret of the other for a connection to initiate.
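
Under the hood, CHAP (RFC 1994) never sends the secret across the wire: the target issues a random challenge, and the initiator answers with an MD5 hash of the identifier, the secret and the challenge. A minimal Python sketch of that exchange, with a made-up secret, looks like this:

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response per RFC 1994: MD5 over identifier || secret || challenge."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Target side: issue a random challenge.
identifier = 1
challenge = os.urandom(16)

# Initiator side: prove knowledge of the shared secret without revealing it.
secret = b"example-shared-secret"  # hypothetical secret
response = chap_response(identifier, secret, challenge)

# Target side: recompute with its own copy of the secret and compare.
assert response == chap_response(identifier, secret, challenge)
```

Mutual CHAP simply runs this exchange twice, once in each direction, with a separate secret for each half.
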
Layer No. 3: RADIUS authentication. RADIUS, or Remote Authentication Dial-In User Service, has long been associated with telephony and modems. This service has evolved to become a standard for authenticating other protocols as well. With RADIUS authentication, both the iSCSI initiator and the target authenticate not to each other, but to the RADIUS server.

Centralizing authentication to a third-party service improves the management of security. Configuring secrets with CHAP authentication requires entering their character strings across numerous servers as well as storage connections. That distribution of password data introduces the opportunity for mistakes. Just keeping track of the sheer volume of passwords can expose vulnerabilities. Centralizing authentication to a RADIUS server reduces the effort, which diminishes these administrative risks.

Layer No. 4: IPSec authentication. Unfortunately, CHAP authentication all by itself isn’t a terribly strong mechanism for securing connections. CHAP is reportedly subject to offline dictionary attacks, enabling a persistent attacker to eventually guess the password through brute-force means. It is for this reason that using random strings of letters and numbers is the recommended practice for creating CHAP secrets. Even RADIUS itself is merely a service for managing CHAP passwords, which means its implementation doesn’t add much to security beyond the aggregation of secrets.
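
Generating such a secret is a one-liner. The sketch below draws 16 random characters from letters and digits using Python’s secrets module; check your initiator’s and target’s documentation for any length rules they impose.

```python
import secrets
import string

# Build a random CHAP secret from letters and digits.
alphabet = string.ascii_letters + string.digits
chap_secret = "".join(secrets.choice(alphabet) for _ in range(16))
print(chap_secret)
```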

These factors suggest that truly securing connection authentication requires a different approach, one that isn’t limited by CHAP’s vulnerabilities. IPSec can serve those stronger authentication needs. IPSec operates at the IP packet layer, giving it the full functionality of the TCP/IP stack. IPSec authentication can occur using pre-shared keys (similar to CHAP), but it also supports stronger frameworks like Kerberos and certificate-based authentication.

IPSec’s biggest limitation is in its support on storage devices. Whereas CHAP authentication is more closely associated with iSCSI connections, IPSec functionality may not be part of the storage OS’s feature set. Consult the manufacturer’s documentation for information about its support within storage hardware.

Layer No. 5: IPSec encryption. All the layers of security to this point focus their energies on ensuring two devices are allowed to communicate. That’s the authentication process. None of them gives any attention to securing storage data as it travels through the network. Solving this problem requires encryption, which is another activity supported by IPSec. IPSec encryption represents the last of these five layers because it occurs only after a server’s initiator has been authenticated to a storage device’s target. Resulting traffic is encrypted at the source and decrypted once delivered.

While encrypting traffic indeed provides the highest levels of security, doing so comes with a cost to performance. The activities in encrypting and decrypting simply require more processing, which itself can impact overall transfer speed. As a result, best practices today suggest that encrypting traffic via IPSec be limited to only untrusted networks, or in cases where extreme security is necessary.

iSCSI is indeed a fantastic protocol for routing servers to their storage. Easy to work with, simple to interconnect, and requiring a short learning curve for those familiar with TCP/IP’s fundamentals, iSCSI provides a great service for IT. Yet, beware its oft-forgotten dark side: Lacking the right layers of security, iSCSI could expose storage traffic in easily exploitable ways.

By: Greg Shields

Tuesday, December 6, 2011

Five business challenges every cloud reseller should prepare to tackle



The migration away from hardware and software sales to cloud services means traditional resellers must reevaluate their business models, staffing strategies and means of generating revenue. Partners that aren't accustomed to a services-oriented business model may find that becoming a cloud reseller requires a number of strategic changes. Thinking of entering the market? Check out these five business challenges every cloud reseller should be prepared to tackle:

1. Selecting the right cloud partner

Traditional value-added resellers (VARs) know that a high-quality product is only one factor in what makes a good vendor partner. The best piece of hardware or software matters little if the vendor's technical support is lacking or its financial health is on life support. The same is true for cloud partnerships.

The cloud services market is active with mergers, acquisitions and consolidation. If the cloud reseller is partnering with a cloud provider that becomes an acquisition target, what protects the cloud reseller? Will the partnership agreement still hold, and if so, how might it change after the merger?

Cloud resellers will likely find more willing partners in smaller cloud service providers, as opposed to larger cloud providers that often sell directly to the customer to bypass the reseller. In that sense, it is often safer for the cloud reseller to work with smaller, channel-friendly cloud providers that need the reseller for sales support. The risk, however, is that these smaller cloud providers are likely to fall prey to acquisition or simply buckle under competitive pressures.

2. Compliance and legal issues for cloud resellers

Another challenge facing cloud resellers is the legal issues surrounding a cloud service partnership. The customer may have regulatory and compliance requirements, such as e-discovery, which must be upheld during and for a certain time after the services contract.

Cloud resellers must determine whether their partnership agreement obliges them to assume a customer's regulatory and compliance requirements or if the onus falls on the cloud provider. The issue becomes more complicated when the parent provider fails to comply with the customer's legal requirements, possibly exposing the cloud reseller to legal actions. Cloud resellers should thoroughly review all cloud agreements to protect themselves from any liabilities.

3. Working with (and against) hardware/software vendor partners

Most cloud resellers will have existing partnerships with hardware and software vendors that in turn may be developing their own cloud services. This can create opportunity or conflict.

Working within existing partnerships -- that is, using the same partner for cloud services and hardware or software sales -- could strengthen the vendor/reseller relationship. Working in concert with the vendor/cloud provider also enables cloud resellers to develop creative packages of cloud-based and on-premises products.

That same partnership, however, may also cause friction between the cloud reseller and vendor/cloud provider. If the vendor partner is making a big push for its public cloud services, conflicts may arise when a customer prefers to tap the reseller for a more lucrative private cloud project for the enterprise. Additionally, cloud resellers that choose a different cloud provider may find themselves competing with their vendor-turned-provider partner.

A healthy cloud services resale business may create other conflicts for resellers as well. Anemic hardware and software sales could affect purchasing discounts and profit margins.

4. Staffing needs change to address new demands for cloud resellers

The move from traditional hardware/software sales into cloud services also shakes up the division of labor for many resellers. For cloud resellers, staff labor time will be less focused on initializing services -- a process that will take hours, not days. Implementation staff may be retained to work on integrating those cloud services with enterprise infrastructures, but that staff will likely see a diminishing amount of work with on-premises servers and storage systems as those devices get farmed out to the cloud.

If the cloud reseller can sustain its systems sales, the hardware and software implementation staff will remain. If the system sales decrease, the implementation staff can be retrained to support the cloud services. And as shorter implementation cycles free up personnel, cloud resellers will be able to support more customers at or near current staffing levels.

But if the sales of systems implementations or cloud services do not grow quickly enough, then the cloud reseller will be forced to begin layoffs. In that case, the first targets for downsizing may be hardware/software installation and implementation staff, leaving the cloud reseller to grow its business via sales, service initialization support and help desk personnel.

5. Cloud services shake up old revenue models

The business models for cloud resellers must evolve to survive the growth of cloud services. Small- to medium-sized businesses (SMBs) want one bill and one provider, creating an opportunity for cloud resellers to create attractive packages of cloud services with other hardware and software purchases.

If the partner program includes a royalty, then the cloud reseller will wait some months or a year before it can recoup its sales cost for the services sold. This long-term window creates some risks for cloud resellers. If the customer cancels the service, if the parent provider goes out of business or if the enterprise goes under, then the cloud reseller may be shortchanged on any royalties. Traditional managed services will also be affected by cloud competition, potentially resulting in another revenue loss for channel pros.

Smaller traditional VARs will likely receive less attention and fewer discounts from vendors and distributors over time, leaving no room for them in the market and forcing their transformation into cloud resellers to compensate for the lost system sales.

By :Gary Audin

Food for thought. What is IT Consumerization?

IT consumerization (information technology consumerization)

IT consumerization is the blending of personal and business use of technology devices and applications.

In today's enterprise, the consumerization of IT is being pushed by a younger, more mobile workforce, who grew up with the Internet and are less inclined to draw a line between corporate and personal technology. Employees have good technology at home and they expect to be able to use it at work too. This blending of personal and business technology is having a significant impact on corporate IT departments, which traditionally issue and control the technology that employees use to do their jobs. Consequently, IT departments are faced with deciding how to protect their networks and manage technology that they perhaps did not procure or provision.

The label IT consumerization has been around since at least 2005 when Gartner Inc. pronounced consumerization "the most significant trend affecting IT in the next 10 years." Gartner traced the trend to the dot-com collapse, when enterprise IT budgets shrank and many IT vendors shifted focus to the potentially bigger consumer IT markets. The result has been a change in the way technology enters the marketplace. Instead of new technology flowing down from business to the consumer, as it did with the desktop computer, the flow has reversed and the consumer market often gets new technology before it enters the enterprise.

Risk vs. reward: Personal cloud storage services in the enterprise

Personal cloud storage services have benefits, but with data security concerns on the rise, organizations may want to consider alternatives that offer more control.

Internal, open source software packages such as SparkleShare and ownCloud offer some of the same synchronization capabilities and ease of use that have made Dropbox and other personal cloud storage services so popular. They also give enterprise IT departments more oversight regarding where and how users can store and access data.
User demand for personal cloud storage services is strong and growing, but every organization must decide which system of document sharing best suits its needs. Security, manageability and version control are all factors that IT decision-makers should consider before choosing a system.

Benefits of personal cloud storage services

Cloud and on-premise document management systems both hold a lot of promise. The biggest benefit of personal cloud storage services is that they’re easy to use. When a user saves files to his or her Dropbox folder, for example, those files are automatically synced in the cloud, where the user can access them from other devices and share them with others.
From an enterprise perspective, personal cloud storage services ensure that users are always working on the most up-to-date version of a document, and they offer support for a variety of platforms. In addition, a company doesn’t have to dedicate resources to implementing and supporting a document management system, and there aren’t many costs associated with planning, administration or equipment.

In-house document management systems also offer many benefits. SparkleShare, for instance, uses the Git version control system and lets IT administrators specify where users can store data. It only supports Linux, Mac OS X and Android now, but Windows and iOS versions are in the works.

Then there’s ownCloud, an open source project that lets organizations set up their own cloud storage services. Users can access the service from Mac, Linux and Windows desktops via WebDAV, and the result is an online collaboration service that offers many of the same features as Dropbox, including version control, file sharing, encryption and syncing.
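
Because WebDAV rides on plain HTTP, talking to an ownCloud-style server from a script is straightforward. The sketch below uploads and re-downloads a file with Python’s requests library; the server URL, credentials and file name are hypothetical, and the exact WebDAV path varies by installation.

```python
import requests  # third-party HTTP client

# Hypothetical WebDAV endpoint and credentials, for illustration only.
WEBDAV_URL = "https://cloud.example.com/remote.php/webdav"
AUTH = ("alice", "app-password")

# WebDAV uploads and downloads are ordinary HTTP PUT and GET requests.
with open("notes.txt", "rb") as f:
    r = requests.put(f"{WEBDAV_URL}/notes.txt", data=f, auth=AUTH, timeout=30)
r.raise_for_status()

r = requests.get(f"{WEBDAV_URL}/notes.txt", auth=AUTH, timeout=30)
r.raise_for_status()
print(r.text)
```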

Personal cloud storage security concerns

Neither personal cloud storage services nor in-house document management systems are a perfect fit, however. On-premise systems -- even using free, open source software -- bring planning, administration and equipment costs that organizations may not want to absorb. And personal cloud storage services eliminate the enterprise’s number-one priority when it comes to data: control.
With a cloud service, enterprise data is stored in a remote data center that no one in IT (or anywhere else in the organization) can access. Enterprise users just drop files into folders, and the inner workings remain a mystery, for the most part.
Earlier this year there was a security problem at Dropbox, when a programming error opened a four-hour window during which anyone could log on to any account with any password. There haven’t been many security breaches of this nature, but there could be, and that possibility raises concerns about control over corporate documents in personal cloud storage services. The fact that these services generally rely on simple sign-ins with a username and password, rather than more advanced authentication methods, can exacerbate the problem.

With all these challenges, it seems most enterprises would balk at using personal cloud storage services. But as the consumerization of IT grows, so too do users’ expectations that these services will be available. And the fact is, cloud services can help people do their jobs better, and they’re often more efficient than what’s available in-house. Organizations need to determine the service that maximizes data control and security while still providing the benefits of the cloud to both the enterprise and its users.

By: R.H. Sheldon

Friday, December 2, 2011

Storage without limits ~ The Aberdeen AberSAN ZXP

AberSAN ZXP (2U Head Unit / JBOD Storage Chassis) Expandable ZFS SAN Storage Subsystem
The AberSAN brings the simplification of a network attached storage (NAS) server to the SAN environment by combining Fibre Channel and iSCSI block level connections with multi-user network sharing. Aberdeen removes storage and partition size limitations to deliver ease of use and flexibility, while featuring more efficient cross platform collaborative editing and shared media storage for post-production, content creation and any business in need of cost effective, scalable storage.

Storage without limits
The Expanding Digital Universe and the continuing explosive growth of disk-based storage mean that the limitations of legacy storage systems will become critically important in the months and years to come.
A storage system built upon 32-bit addressing is never going to be as flexible and scalable as one built upon a 128-bit system. So the AberSAN Z-Series, thanks in part to its use of ZFS, has a number of advantages over legacy and Linux- or BSD-based solutions. These include:
Unlimited snapshots. With the exponential increase in the amount of data stored, snapshots need to become more granular if they are to be restored in a timely manner. Legacy solutions are limited to as many as 255 snapshots. By comparison, AberSAN Z-Series can accommodate 2^48 (roughly 281 trillion) snapshots. This means you can run hourly snapshots, for example, on all of your data, which in turn means you can restore in a timely manner and the incremental amounts sent over the network can be smaller than daily updates.
Unlimited file system size. Again, thanks to the legacy nature of many existing solutions, each file system is often limited to a relatively small amount of data stored. There are no such limits when you use AberSAN Z-Series.

Scalability
With AberSAN Z-Series you can scale performance the same way you scale the performance of other applications: by adding hardware resources, whether that means increasing the available memory, adding CPU power or increasing the number of targets available.

Multi-level data protection
AberSAN Z-Series protects your organization's data with features and capabilities ranging from the most granular, per transaction data integrity checks to higher level backup and disaster recovery capabilities.
Every read or write made by AberSAN Z-Series utilizes ZFS data integrity. To avoid accidental and silent data corruption, ZFS provides end-to-end checksumming and transactional copy-on-write I/O operations. These operations eliminate the 'write holes' and silent data corruption that have plagued storage solutions not based on ZFS.

Inherent virtualization
AberSAN Z-Series is built from the ground up on the revolutionary file system ZFS which means that virtualization is at the core of AberSAN Z-Series. This virtualization enables thin provisioning and also improves performance via I/O pooling. This means that when you add more disks or systems to AberSAN Z-Series the overall solution accelerates. Another benefit of AberSAN Z-Series is that it can run within VMware or other virtualized environments.

AberSAN ZXP-Series OS Highlights:
  • Dual ported SAS drives and Expanders enable HA
  • Shared pools of storage from any combination of storage hardware
  • Unlimited snapshots and clones
  • Unlimited file size
  • Block and file based replication
  • End to end data integrity
  • Thin provisioning
  • Integrated search
  • Hybrid storage pools via automated use of SSDs
  • L2ARC and ZIL Cache SSD Options for added IOPS
  • Virtualization management
  • Cloud ready storage capabilities
  • In-line Deduplication
  • iSCSI Target Capable
  • Multi JBOD expandable

Thursday, December 1, 2011

Does the cloud need an app store? CA says yes with cloud marketplace

Imagine a world with iPhones and iPads, but no App Store.

That's the state of the cloud ecosystem today, according to CA Technologies, which recently launched its Cloud Commons Marketplace -- a cloud marketplace and communal hub where cloud buyers, sellers and developers can advertise, test and buy services.

"People who are not in the community of developers don't realize this is a key thing that's missing," according to Stephen Hurford, director of cloud services at DNS Europe, a London-based cloud provider that sells its CA AppLogic-based Cloud Control Panel service, a cloud application delivery and management platform, in the cloud marketplace.

About 25 software vendors and cloud providers have joined Cloud Commons since the marketplace launched in mid-November. Although most of the products currently sold are based on CA's AppLogic application delivery platform -- because most of the market's current products are sold by CA -- it is a platform-agnostic marketplace. Like other app store business models, cloud providers and ISVs pay CA a percentage of any sales made through the Cloud Commons Marketplace. That percentage varies according to the type of service and deployment model.

By selling their services through Cloud Commons, providers enable prospective customers to procure cloud services just the way they shop for consumer goods at an online retailer -- comparing user reviews, rating products, selecting a service, adding it to the shopping cart and proceeding to checkout to pay. But instead of buying books or airline tickets, customers rent an independent software vendor's (ISV's) application and select the service provider cloud environment where they want it to run.
"That's practically all of our marketing done for us -- and with a much bigger global [audience] than we could reach on our own," said Hurford, whose company primarily targets developers buildingSoftware as a Service (SaaS) and Platform as a Service (PaaS) products on AppLogic.
Cloud providers can also use CA's marketplace to do their own shopping and expand their portfolios by purchasing and pre-integrating cloud products from ISVs on Cloud Commons, as well as use cloud-based AppLogic appliances to convert traditional applications into cloud services, according to Andi Mann, vice president of strategic solutions at CA.

"[Cloud] providers have the ability to now be hosts for a range of software that is not [otherwise] available as a cloud offering," Mann said.

Aggregation model spells opportunity for cloud marketplaces

CA is hardly the only vendor to sponsor a cloud marketplace initiative. Equinix and Synnex launched cloud marketplaces earlier this fall. VMware announced its intentions earlier this year to develop something akin to a cloud marketplace, the vCloud Datacenter Global Connect service, which would rely on cloud providers to partner with each other. Even NASA recently announced plans to build a cloud marketplace for the science community.
Some of these projects, however, bear a closer resemblance to directories than marketplaces -- not actually selling anything, but instead prompting would-be buyers to submit a form with their contact information for the cloud provider's or ISV's sales representatives.

An open cloud marketplace that resembles a cloud aggregator model, such as Cloud Commons, will be a more rewarding opportunity for the cloud provider ecosystem, according to Steve Hilton, principal analyst at Boston-based Analysys Mason.

"It gives [cloud providers] an opportunity to sell a pre-integrated solution, which is something needed in the cloud world," Hilton said. "While cloud services are a large opportunity, the bundling of cloud services and other solutions like connectivity, security solutions and consulting [services] is an even better model."

Global cloud services revenue is expected to reach $40 billion by 2016, and about half of that (46%) will go to ISVs bypassing the channel to sell their own cloud services, Hilton said. Twenty-five percent of that revenue will go to communications service providers, while 29% will go to partners, distributors and solution providers.

"The cloud marketplace gives sellers another channel-to-market for their cloud services that they wouldn't normally have," Hilton said.

Emphasizing that the Cloud Commons Marketplace is still in its infancy, DNS Europe's Hurford said he sees room for improvement in the cloud marketplace, adding that CA must shorten or streamline the certification process for cloud providers, and ISVs must rethink their pricing.

"A lot of vendors are setting their own prices for their products and -- perfectly honestly -- they're quite high, based on what we know the market is willing to pay," he said. "That's something [CA is] going to have to manage in the marketplace."

Developer's workbench adds to CA cloud marketplace

The cloud marketplace isn't the only expansion to Cloud Commons, which CA originally launched in 2010 as an online community for the cloud ecosystem. CA also opened the Developer Studio, which gives Cloud Commons members free access to project management tools and a cloud-based AppLogic test environment. Once tested and certified, the cloud service may then be loaded onto the cloud marketplace.

CA launched the Developer Studio to address the challenges that cloud providers and ISVs face when trying to rebuild applications and services for the cloud, Mann said.

"There's a lot of time and a lot of effort, and it shouldn't be that involved," he said. "We're a software vendor ourselves, and we see this every day with our own customers and our own solutions."

The ability to save, share and collaborate on programming templates will also be invaluable to smaller and midsize cloud providers, Hurford said.

"Bringing a customer onto the cloud can be quite complex and can involve re-architecting or rebuilding a template from nothing," Hurford said. "Up until now, it was a case of rebuilding it [ourselves], as a service provider, or [getting] word of mouth contacts through CA salespeople or other partners in the community to see if anyone had done this before."

By: Jessica Scarpati

Balancing the Scales ~ Selecting the Right EHR for the Life of Your Practice and Your Patients

If we are at or approaching a technological tipping point in the history of healthcare, then it has never been more important for physician practices to select the right electronic health record (EHR) – and there are tangible reasons to believe so.
A recent survey of 400 providers by KLAS found that 35 percent are replacing existing systems, including one-third of small practices and 43 percent of practices with 100 physicians or more ("Ambulatory EMR: Win Rates, Replacements, and Provider Loyalty," Feb. 23).
The industry is regularly updated with financial analyses forecasting growth in the EHR market, such as the June 2011 report from MarketsandMarkets expecting the U.S. EHR market to reach approximately $6 billion by 2015, up from about $2.2 billion in 2009.
At the same time, patients are increasingly becoming discerning consumers of healthcare and desiring more from technology, meaning they also will be seeking best practices. A recent Dell survey, for example, found 74 percent of patients share the expectation that EHRs should be able to link providers, healthcare institutions, labs and other facilities.
Taken together, and as the implementation impact of the meaningful use initiative becomes increasingly evident, it is equally important to approach EHR selection as the starting point and foundation of a long-term business strategy for navigating the future of healthcare, including accountable care, payment reform and new payer models yet to come.
So as meaningful use stages progress, as PQRS and other quality reporting programs evolve, and as ICD-10, HIPAA 5010 and accountable care take hold, there are foundational criteria for selecting the right EHR solution that can set the right course even before the research begins or an RFP is sent out the door.
At the outset:
• Bring all parties in your practice together into the discussion. Physicians, practice administrators, physician assistants, nurses, medical assistants, billers, schedulers and other staff important to your institution should be heard.
• Assess practice goals that you want to achieve by adopting an EHR:
- Capturing meaningful use incentives?
- Improved internal workflows and practice efficiency?
- Improved patient communication, engagement and satisfaction rates?
- The ability to better exchange data with referring physicians and other health facilities?
- The ability to exchange data with immunization registries and public health agencies?
- The ability to participate in clinical trials and research?
- A system that integrates clinical, financial and administrative tasks for operational simplicity?
- A premises-based or hosted solution?
- How best to position the practice for the future of accountable care and reimbursement model variances?
- All of the above?
It’s also important to ask yourself questions before they are asked of you. For example, do you have an equipped in-house IT staff, or do you require a higher level of support from your EHR solution provider? If you are replacing a system, as many are, was it due to insufficient technology – the interfacing of legacy EMR and practice management systems, for example – or because the right system for your needs was not fully investigated on the front end during your last purchase?
Once you have an internal game plan underway, it’s time to start the external process:
• Seek site visits and references only from practices where more than 70 percent of care providers use the EHR solution daily.
• Of those EHRs that are widely used, make sure they are being used at the point of care with patients.
• Seek out practices with similar specialties and workflows that use the solution in this manner.
• Given your specialty's requirements, evaluate how readily the solution can be customized.
• Also during site visits, seek demonstrations where the exact software and version you are considering is in use.
• Usability is extremely important, so look for a system flexible enough to adapt to your specific workflows.
• Examine the EHR provider's long-term business plan, and ensure it lays out an outlook of five years or more, with a strategy and vision that encompasses your own growth and technology goals.
• Review independent assessments of EHR providers and technology in the areas of training, installation, go-live monitoring and support, service and certification:
- KLAS Research. The above-noted KLAS conducts a range of customer-driven evaluations and awards based on more than two dozen criteria, such as sales and contracting, implementation and training, functionality and upgrades, and service and support, as ranked by healthcare providers and administrators.

- Certification Commission for Healthcare Information Technology (www.cchit.org). A long-time independent certification body for comprehensive and specialty EHR functionality that also evaluates usability, CCHIT additionally certifies EHRs against meaningful use criteria.
- Industry membership organizations such as the Medical Group Management Association (www.mgma.com) and the Healthcare Information and Management Systems Society (www.himss.org) also provide independent assessments of selection priorities and recommendations.
If the meaningful use incentive program, with its provisions for up to $44,000 per eligible provider in the Medicare pathway and up to $63,750 in the Medicaid pathway, is a motivating factor, then also begin with some foundational review:
• Assign a meaningful use assessment leader within your practice or facility.
• Make sure the EHRs you are researching are certified for meaningful use Stage 1.
• Use the core and menu criteria from Stage 1 as a checklist for a system’s functionality.
• Ensure that the data exchange standards named in the meaningful use Final Rule, CCD and CCR, are supported; a quick format check is sketched after this list.
• Critically assess the EHR provider's knowledge base regarding future meaningful use readiness for Stages 2 and 3.
• Seek an EHR solution that includes a meaningful use dashboard so you can easily track which of the allowable and quality measures you have met.
• Ensure the product demonstrated to you during the sales process is the exact certified product you will be purchasing and installing.
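As a concrete illustration of the CCD/CCR point above, the snippet below is a minimal sketch, not drawn from any particular vendor's product, of how a practice's IT staff might sanity-check a sample export file from a candidate EHR. The function name and file name are hypothetical, and the HL7 and ASTM namespaces shown are assumptions based on the published standards; verify the exact formats and versions against the vendor's documentation and the certification criteria.

import xml.etree.ElementTree as ET

def detect_export_format(path):
    # Parse the exported XML file and inspect only its root element.
    root = ET.parse(path).getroot()
    if root.tag == "{urn:hl7-org:v3}ClinicalDocument":
        return "CDA-based document (the CCD is one such document)"
    if root.tag == "{urn:astm-org:CCR}ContinuityOfCareRecord":
        return "ASTM Continuity of Care Record (CCR)"
    return "Unrecognized root element: " + root.tag

# Hypothetical usage with a sample export provided by the vendor:
# print(detect_export_format("sample_patient_summary.xml"))

A check this shallow only confirms what kind of document you were handed; real conformance testing should rely on the certification bodies listed earlier.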
Keep in mind there is still time to take advantage of the program. You can begin quality measure reporting and attestation by Oct. 1, 2012, and still receive the maximum incentives in the Medicare pathway, or by 2016 to maintain maximum funding in the Medicaid pathway, if you start the process today. I do not recommend waiting anywhere near that long to implement a certified EHR, because unforeseen hurdles could cause you to miss your maximum incentive allotment, but the timeline is good to know.
And there are tangible reasons to find confidence in the long-term availability of incentives, which are drawn from the Medicare Trust Funds held by the U.S. Treasury and are therefore not subject to annual Congressional budget appropriations. Still, with the debt ceiling debate fresh in mind, and as our wise forefathers and foremothers taught us, gather what is yours today, as you never know what tomorrow holds. The EHR adoption incentives are flowing today, and they are front-loaded, so you can earn half or more of the total incentive allotment in just the first two years.
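To put rough numbers on that front-loading claim, here is a minimal sketch in Python, assuming the published per-year payment schedules ($18,000, $12,000, $8,000, $4,000 and $2,000 for the Medicare pathway; $21,250 in year one and $8,500 in each of the next five years for Medicaid). It is an illustration only, not part of the incentive rules themselves, and current figures should be verified with CMS.

# Illustration of how front-loaded the incentive payments are,
# assuming the published per-year schedules.
medicare_schedule = [18000, 12000, 8000, 4000, 2000]        # totals $44,000
medicaid_schedule = [21250, 8500, 8500, 8500, 8500, 8500]   # totals $63,750

def share_after_years(schedule, years):
    # Fraction of the total incentive collected after the given number of years.
    return sum(schedule[:years]) / float(sum(schedule))

print("Medicare, first two years: {:.0%}".format(share_after_years(medicare_schedule, 2)))
print("Medicaid, first two years: {:.0%}".format(share_after_years(medicaid_schedule, 2)))

Under those assumed schedules, a provider would collect roughly 68 percent of the Medicare total, and just under half of the Medicaid total, within the first two years, which is consistent with the front-loading described above.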

Whatever your motivation for implementing an EHR, the process can, and arguably should, be time consuming, but it does not have to be intimidating. On your side is a wealth of existing EHR adoptions whose clinical, workflow, usability and ROI outcomes you can evaluate.
Once you make the right EHR selection for your practice, you will realize returns well beyond the incentives, and will be providing your patients with the most advanced care possible while helping to create a smarter, more sustainable healthcare system in America and globally.
This tipping point is not a tripping point; it is, by definition, a point at which what was previously rare becomes common, and therefore an opportunity to balance the scales of practice and patient needs.
By: Justin Barnes, chairman emeritus of the national Electronic Health Record Association (EHR Association) and vice president of marketing, industry affairs and government affairs for Greenway Medical Technologies, Inc.