Tuesday, November 29, 2011

VPLS: A secure LAN cloud solution for some, not all

VPLS (Virtual Private LAN Service) is one of the most recent buzzwords to enter the service-provider acronym world, and some vendor marketing departments are touting it as the latest VPN panacea. Not surprisingly, some service providers believe the hype and are now offering VPLS in environments where it could do much more harm than good.

Security experts have already realized the "opportunities" (read: attack vectors) offered by an enterprise-wide LAN cloud and demonstrated practical VPLS-based attacks. Demonstrations of these VPLS-based attacks can be seen on slides 23 to 31 in the All your packets belong to us presentation given at ShmooCon 2009. Beyond the security threats, it's vital to understand the advantages and limitations of VPLS in order to offer a range of secure services matching your customers' expectations.

The evolution of VPLS from previous networking technologies
Before addressing how service providers can offer secure VPLS solutions, it's important to know how VPLS developed. When the emerging service provider networking vendors tried to replace "old-world" technologies like frame relay and ATM with "new-world" IP, they focused on IP-based virtual private networks (VPNs), which were successfully implemented with MPLS VPN technology.

But MPLS VPN technology did not fit all the needs of incumbent service providers, which had to transport legacy traffic, such as ATM-based video surveillance, across their infrastructure. Early adopters also discovered that even though IP was ubiquitous at the time when MPLS VPN technology was introduced, large enterprises still had to support small but significant amounts of non-IP traffic. Even worse, some IP-based applications (including server clustering in disaster-recovery solutions) required transparent LAN communication.

Networking vendors tried to cover all service provider needs and introduced technologies that enabled point-to-point transport of any traffic across the service provider infrastructure, including AToM (Any Transport over MPLS) and L2TPv3 (Layer 2 Tunneling Protocol version 3). These point-to-point offerings allowed service providers to create pseudowires carrying Ethernet, ATM or frame relay data across their MPLS or IP infrastructure, addressing the legacy needs of enterprise customers. With all the building blocks in place, it wasn't long before someone tried to replicate the Local Area Network Emulation (LANE) idea from the ATM world and build a technology that would dynamically create MPLS pseudowires to offer any-to-any bridged LAN service -- and VPLS was born.

VPLS lacks layer 3 security features
VPLS is a technology that provides any-to-any bridged Ethernet transport among several customer sites across a service provider infrastructure.

All sites on the same VPN are connected to the VPLS service and belong to the same LAN bridging domain. Frames sent by workstations attached to the site LANs are forwarded according to IEEE 802.1 bridging standards. VPLS offers none of the layer 3 security or isolation features offered by layer 3 VPN technologies, including MPLS VPN and IPSec.

VPLS layer 2 switching problems
The networking industry made numerous attempts to implement layer 2 switching -- previously known as bridging -- across lower-speed WAN networks. All of these attempts, including WAN bridges, bridge routers (WAN bridges with limited routing functionality, called brouters) and ATM-based LANE, have failed because of the inherent limitations of bridging. As I wrote in the article "Making the case for Layer 2 and Layer 3 VPNs," "the world is not flat, and Layer 2 services cannot cover the needs of an entire network."

A layer 2 end-to-end solution (including VPLS) has to permit every workstation to communicate with every other workstation in the extended LAN or send Ethernet packets to all workstations connected to the same bridging domain. VPLS thus provides no inter-site isolation:
  • A single workstation can saturate the WAN links of all sites connected to the VPLS service.
  • An intruder gaining access to a workstation on one site can try layer 2 penetration techniques on all workstations and servers connected to the VPLS cloud.
  • VPLS-based services cannot implement traffic filters, as these filters would violate the "transparent LAN" principle.
With these threats in mind, it's easy to see that you should offer VPLS services only to the customers actually requiring multi-site transparent LAN solutions, not to everyone asking about a simple VPN service.
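
To make the flooding behavior concrete, here is a minimal, purely illustrative Python sketch of 802.1-style MAC learning in a VPLS-like bridge; the site names and MAC addresses are hypothetical. Any broadcast or unknown-unicast frame is replicated to every attached site, which is exactly why one chatty or compromised host affects the whole cloud.

```python
# Minimal sketch of 802.1-style MAC learning and flooding (illustrative only).
# Site names and MAC addresses below are hypothetical.

class VplsBridge:
    def __init__(self, sites):
        self.sites = sites          # attachment circuits, one per customer site
        self.mac_table = {}         # learned MAC -> site

    def forward(self, src_mac, dst_mac, ingress_site):
        self.mac_table[src_mac] = ingress_site      # learn the source
        if dst_mac == "ff:ff:ff:ff:ff:ff" or dst_mac not in self.mac_table:
            # Broadcast or unknown unicast: flood to every other site,
            # consuming WAN bandwidth at all of them.
            return [s for s in self.sites if s != ingress_site]
        return [self.mac_table[dst_mac]]

bridge = VplsBridge(["site-A", "site-B", "site-C", "site-D"])
print(bridge.forward("00:11:22:33:44:55", "ff:ff:ff:ff:ff:ff", "site-A"))
# -> ['site-B', 'site-C', 'site-D']  (one chatty host reaches every site)
```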

Which customers need VPLS?
If your customer has applications that use non-IP protocols (including legacy Microsoft or AppleTalk networks), VPLS is the best alternative, as long as the customer understands its security implications. To implement a secure solution on top of a VPLS backbone, each customer site should use a router to connect to the VPLS backbone. If the customer is willing to go that route, a managed router service delivers the maximum value-add.

VPLS is also a perfect fit for disaster recovery scenarios, where you need to create an impression that servers located at different sites belong to the same LAN.

VPLS: Not appropriate for all customers
When a customer with insufficient IT knowledge approaches your sales team asking for a VPN solution linking numerous remote sites, VPLS might not be the best answer; the customer probably needs a more scalable MPLS VPN solution. Implementing VPLS would be faster and easier (more so since the customer is not networking-savvy), but after the first major incident -- and it will happen eventually -- you'll be faced with an extremely unhappy customer and a tarnished reputation.

By: Ivan Pepelnjak

Wednesday, November 23, 2011

12 Awesome Server Admin Apps for Windows Phone 7

1. Pingdom Pulse (Free) provides access to your free or paid Pingdom account, a hosted third-party server monitoring service. You can view a summary of all your Pingdom checks, including current status and response time, as well as a summary of the past 30 days' performance for each check. Additionally, you can run manual checks on any HTTP server.

2. Mobile Server Stats ($1.99 after trial) provides remote real-time monitoring stats of a Windows Server or PC when its free server component is installed on the computer. Get standard statistics (e.g., on system, CPU, drives, processors, services, running processes, users and groups) and add custom WMI queries. It also includes simple HTTP server monitoring. You can view real-time statistics or cached polls stored on the computer.
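
As an aside, the custom WMI queries such a tool runs are easy to prototype yourself. The sketch below is not part of Mobile Server Stats; it assumes a Windows machine with the third-party Python "wmi" package installed and simply pulls a couple of standard classes.

```python
# Sketch of the kind of WMI query a monitoring agent might run (Windows only).
# Assumes the third-party "wmi" package (pip install wmi); not part of the app itself.
import wmi

c = wmi.WMI()  # connect to the local WMI service

for cpu in c.Win32_Processor():
    print(cpu.Name, cpu.LoadPercentage)

for disk in c.Win32_LogicalDisk(DriveType=3):  # DriveType 3 = local fixed disks
    print(disk.DeviceID, disk.FreeSpace, disk.Size)
```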

3. Network Tools (Free ad-based or $2.99) uses a remote server to run pings, TCP port connection tests and HTTP/HTTPS connection tests. It also provides a graphical display of ping, port 80 and HTTP response statuses. Remember, because a remote server does the testing, it can't reach local resources; targets must be accessible via the Internet.
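
If you only need the basics and your targets are reachable from where you are, the same checks take a few lines of standard-library Python. This is a local sketch, not how Network Tools works internally; the host and URL are placeholders.

```python
# Bare-bones reachability checks run locally: a TCP connect test and an HTTP
# status check. Host and URL below are placeholders.
import socket
import urllib.request

def tcp_port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_status(url, timeout=5):
    """Return the HTTP status code for url (follows redirects)."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status

print(tcp_port_open("example.com", 80))
print(http_status("http://example.com/"))
```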

4. Wake My PC (Free) lets you remotely boot up computers via the Wake-On-LAN (WOL) protocol. This is especially useful if you must remotely access files or connect via remote desktop. You must configure your compatible computer (in the BIOS) and network to use WOL. Then, simply enter the computer's MAC address and Internet IP info.
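
Wake-On-LAN itself is simple: a UDP "magic packet" containing six 0xFF bytes followed by the target MAC address repeated 16 times. A minimal sketch (hypothetical MAC, default broadcast address and port 9) looks like this:

```python
# Sketch of the Wake-On-LAN "magic packet": 6 bytes of 0xFF followed by the
# target MAC address repeated 16 times, sent as a UDP broadcast (port 9 here).
import socket

def send_wol(mac, broadcast="255.255.255.255", port=9):
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

send_wol("00:11:22:33:44:55")  # hypothetical MAC
```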

5. Mobile DDNS ($0.99) is a DDNS client for your phone to update DDNS providers: DynDNS, NameCheap and ZoneEdit. This is great if you must connect to your phone via the Internet. You don't have to find and track your public IP. Just use the host name from a DDNS provider, and it will always point to your phone.
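
Most DDNS providers accept a simple authenticated HTTP update request. The sketch below uses a DynDNS-style update URL; treat the endpoint, parameters and credentials as placeholders and check your provider's API documentation before relying on them.

```python
# Rough sketch of a DynDNS-style update request; the endpoint, parameters and
# credentials below are placeholders -- consult your DDNS provider's API docs.
import urllib.request

def update_ddns(hostname, ip, username, password):
    url = f"https://members.dyndns.org/nic/update?hostname={hostname}&myip={ip}"
    pw_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    pw_mgr.add_password(None, url, username, password)
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(pw_mgr))
    opener.addheaders = [("User-Agent", "example-ddns-client/1.0")]
    with opener.open(url, timeout=10) as resp:
        return resp.read().decode()   # e.g. "good 203.0.113.7" or "nochg ..."

# print(update_ddns("myhost.dyndns.org", "203.0.113.7", "user", "secret"))
```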

6. Cool Remote (Free) is used to remotely connect to and control a Windows (XP/2003/Vista/2008) machine running its free server application from your phone or any other computer via the web browser. It features full PC keyboard support (including ctrl, alt, shift, tab, esc, win, fn, home and end) and multi-monitor support. You can input connection details or scan the local network to find the PC.

7. RemoteDesktop ($12.99 after trial) is a full RDP Remote Desktop client for connecting to editions of Windows XP, Vista, 7 and 2003 Server that support incoming Remote Desktop connections (Professional and higher editions). It connects natively via RDP and doesn't require an extra server component unless the computer isn't reachable over the Internet; for local-only connections, you can use the server component. RemoteDesktop supports standard security or Network Level Authentication (NLA). Multiple resolutions and pinch zoom are also supported.

8. SimpleVNC ($0.99 after trial) is another remote desktop client, but it supports the platform-independent VNC protocol. You can use it to connect to and control a Windows, Mac or Linux machine. The computer just has to have a VNC server (such as the free TightVNC or UltraVNC) installed.

9. Mobile SQL ($5.99) can connect to a Microsoft SQL Server or Oracle MySQL database. You can then run queries and updates directly from your phone. It's a simple application that is great for performing occasional or emergency maintenance tasks.
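
For comparison, the same kind of ad-hoc query from a desktop script against Microsoft SQL Server might look like the sketch below. It assumes the pyodbc package and a suitable ODBC driver are installed; the server, database and credentials are placeholders, not anything tied to the app.

```python
# Sketch of an ad-hoc query against Microsoft SQL Server. Assumes pyodbc and
# an ODBC driver are installed; server, database and credentials are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=db.example.com;DATABASE=inventory;UID=admin;PWD=secret"
)
cursor = conn.cursor()
cursor.execute("SELECT TOP 5 name, create_date FROM sys.databases")
for row in cursor.fetchall():
    print(row.name, row.create_date)
conn.close()
```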

10. VPSinn (Free) lets you access, manage and monitor your virtual environment. It's compatible with the Citrix XenServer and Xen Cloud Platform (XCP). You can check the status of your virtual machines, view their properties, and change states (start, stop, suspend and reboot) or perform live migration.

11. My FTP ($1.29) is an FTP client with Dropbox support. Connect to Internet-accessible FTP or NAS servers or your Dropbox account to access, upload, download and manage your files. Its integrated file support lets you view images, play movies, listen to music and open text files.
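
Under the hood, an FTP transfer is little more than this standard-library sketch (host, credentials and file names are placeholders):

```python
# What a basic FTP download/upload boils down to, using Python's ftplib;
# host, credentials and file names below are placeholders.
from ftplib import FTP

with FTP("ftp.example.com") as ftp:
    ftp.login("user", "secret")
    ftp.cwd("/backups")
    print(ftp.nlst())                                   # list the directory
    with open("report.pdf", "wb") as f:                 # download
        ftp.retrbinary("RETR report.pdf", f.write)
    with open("notes.txt", "rb") as f:                  # upload
        ftp.storbinary("STOR notes.txt", f)
```
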
12. SubCalc Subnet Calculator (Free) helps you calculate the network address, wildcard mask, subnet mask and other IP info. This is great when deploying a new network or configuring existing network components. You can input an IP address and define the number of hosts needed or the prefix length. It will then display the subnet mask, wildcard mask and broadcast address.
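
If you'd rather script the same subnet math, Python's ipaddress module covers everything SubCalc reports; the address and prefix below are just an example.

```python
# The same subnet math with the standard ipaddress module: network address,
# subnet mask, wildcard mask (hostmask) and broadcast address for a prefix.
import ipaddress

net = ipaddress.ip_network("192.168.10.37/27", strict=False)
print(net.network_address)    # 192.168.10.32
print(net.netmask)            # 255.255.255.224
print(net.hostmask)           # 0.0.0.31  (wildcard mask)
print(net.broadcast_address)  # 192.168.10.63
print(net.num_addresses - 2)  # 30 usable hosts
```
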
Search and find more apps with or without a Windows Phone by downloading Microsoft's Zune program, or try third-party directories, such as WindowsPhoneApplist.com or FreewarePocketPC.net.

By Eric Geier

25 "Worst Passwords" of 2011 Revealed

If you see your password below, STOP!
Stop reading this post and go change your password immediately -- before you forget. You will probably need to make changes in several places, since passwords tend to be reused across multiple accounts.
Here are two lists, the first compiled by SplashData:

1. password
2. 123456
3. 12345678
4. qwerty
5. abc123
6. monkey
7. 1234567
8. letmein
9. trustno1
10. dragon
11. baseball
12. 111111
13. iloveyou
14. master
15. sunshine
16. ashley
17. bailey
18. passw0rd
19. shadow
20. 123123
21. 654321
22. superman
23. qazwsx
24. michael
25. football

Last year, Imperva looked at 32 million passwords stolen from RockYou, a hacked website, and released its own Top 10 "worst" list:

1. 123456
2. 12345
3. 123456789
4. Password
5. iloveyou
6. princess
7. rockyou
8. 1234567
9. 12345678
10. abc123

If you've gotten this far and don't see any of your passwords, that's good news. But, note that complex passwords combining letters and numbers, such as passw0rd (with the "o" replaced by a zero) are starting to get onto the 2011 list. abc123 is a mixed password that showed up on both lists.

Last year, Imperva provided a list of password best practices created by NASA to help its users protect their rocket science. They include:

  • It should contain at least eight characters.
  • It should contain a mix of four different types of characters: upper case letters, lower case letters, numbers, and special characters such as !@#$%^&*,;" If there is only one letter or special character, it should not be either the first or last character in the password.
  • It should not be a name, a slang word, or any word in the dictionary. It should not include any part of your name or your e-mail address.
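
As a rough sketch, the main rules above translate into a simple checker like the one below; the dictionary test is reduced to a tiny sample word list purely for illustration.

```python
# Quick sketch of the main rules above as a checker; the dictionary test is
# reduced to a tiny sample word list for illustration.
import string

COMMON_WORDS = {"password", "qwerty", "monkey", "dragon", "letmein"}  # sample only

def check_password(pw, name="", email=""):
    problems = []
    if len(pw) < 8:
        problems.append("shorter than eight characters")
    classes = [any(c.islower() for c in pw),
               any(c.isupper() for c in pw),
               any(c.isdigit() for c in pw),
               any(c in string.punctuation for c in pw)]
    if not all(classes):
        problems.append("missing one of: lower, upper, digit, special character")
    if pw.lower() in COMMON_WORDS or (name and name.lower() in pw.lower()) \
            or (email and email.split("@")[0].lower() in pw.lower()):
        problems.append("contains a dictionary word or part of your name/e-mail")
    return problems or ["looks reasonable"]

print(check_password("passw0rd"))      # flags the missing character classes
print(check_password("nilmDOWN2s!"))   # passes the basic checks
```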

Following that advice, of course, means you'll create a password that will be impossible to remember, unless you try a trick credited to security guru Bruce Schneier: Turn a sentence into a password.

For example, "Now I lay me down to sleep" might become nilmDOWN2s, a 10-character password that won't be found in any dictionary.
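
A generic version of that trick is easy to script. The sketch below just takes the first letter of each word and substitutes digits for a few common words, so it produces something like "NIlmd2s" rather than the hand-tuned example above.

```python
# Generic take on the sentence trick: keep the first letter of each word and
# substitute digits/symbols for a few common words. This is a deliberately
# simple variant and won't reproduce "nilmDOWN2s" exactly.
def sentence_to_password(sentence):
    subs = {"to": "2", "too": "2", "for": "4", "and": "&"}
    parts = []
    for word in sentence.split():
        parts.append(subs.get(word.lower(), word[0]))
    return "".join(parts)

print(sentence_to_password("Now I lay me down to sleep"))  # -> NIlmd2s
```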

Can't remember that password? Schneier says it's OK to write it down and put it in your wallet, or better yet, keep a hint in your wallet. Just don't also include a list of the sites and services that password works with. Try to use a different password on every service, but if you can't do that, at least develop a set of passwords that you use at different sites.

Someday, we will use authentication schemes, perhaps biometrics, that don't require so much jumping through hoops to protect our data. But, in the meantime, passwords are all most of us have, so they ought to be strong enough to do the job.

By David Coursey

Monday, November 21, 2011

A costly ripple from Thai floods

A bad dream is playing out in Thailand and turning into an unpleasant and costly reality for Midlands computer retailers and repair shops.

The Southeast Asian country, which is grappling with the effects of devastating flooding and evacuations from its largest cities, is home to many of the world's largest computer hard disk drive manufacturers, including Western Digital, Seagate Technology and Toshiba Corp.

Over the last three months, floodwaters have crippled hard drive manufacturing facilities, bringing hard disk drive and component production to little more than a trickle. Hard disk drives are about the size of a small book and serve as the main storage components in computers, holding the machine's operating system, software and data.

Analysts and computer store owners who serve Nebraska and Iowa say the shortfall of hard drives is going to take its toll on the holiday shopping season and could last well into 2012. The issue could mean disappointment or lighter wallets for consumers planning to give certain electronic gifts.
 "It's a nightmare scenario for us," said Thor Schrock, owner and CEO of Schrock Innovations, a computer retailer and repair shop with locations in Omaha, Papillion and Lincoln. "And it's only going to get worse."

At the beginning of November, Schrock said, hard disk drives already were becoming so expensive at wholesale prices that he spent $27,000 at Office Max and bought as many hard drives as he could.

"We cleared out three of their warehouses," Schrock said. "In order to get to the end of November, we needed more hard disk drives."

Hard drive prices have doubled since mid-October, Schrock said.

He said there may not even be enough available hard drives to maintain a predictable inventory, especially of low-priced netbooks.

"As a computer retailer, this is our big time of the year, and if we can't get hard drives, we can't sell computers," Schrock said. "And if we can't sell computers, we can't service computer parts, either."  Fang Zhang, an iSuppli analyst, wrote in a research note that Western Digital will likely be knocked from the No. 1 perch of hard disk drive producers because of the flooding. Zhang also predicts that the shortage will become more serious at the end of 2011 and pour into 2012, at least through the first quarter.

During that time, shipments are expected to decrease 30 percent, causing prices to soar, Zhang said.

Darby Deeds, owner of Computers To Go in Bellevue, said in mid-October he ordered 500-gigabyte hard drives, which are used to build and repair computers, for $40 apiece. As of Friday, the same drives carried a price tag of $99.

Across the board, consumers buying computers as gifts this holiday season should expect that to translate into at least a 10 percent price hike on all computer purchases, he said.

With that inflation sending the prices of PCs upward, Deeds said he has noticed that his sales to business clients, which include some sales to Offutt Air Force Base, haven't slowed. He expects that to continue through the shortfall.

"Their reaction has been — 'We've got to have it,'" Deeds said. "They're not changing anything."  General retail consumers, however, are exhibiting more patience, Deeds said.

That spells bad news for Nebraska Furniture Mart, one of the area's largest electronics retailers.

Bob Batt, executive vice president at the Berkshire Hathaway-owned Furniture Mart, said the disaster in Thailand reminds him of the catastrophic earthquakes and tsunami that blasted Japan and disrupted electronics supply chains earlier this year.

"We're not expecting good things," Batt said. "We're monitoring the situation and will take the appropriate action. It's that kind of a deal. Time will tell."

Computer companies that have been pinched by weak demand in the United States and Europe are bracing for the hard drive shortage to compound those issues.

Analysts have said that the drives that are available will first go to companies with top priority, such as Dell and Hewlett-Packard.

Still, those producers remain concerned. In a statement, Dell said that it anticipates hitting the low end of its fourth-quarter revenue projections because of the disaster in Thailand.

The industry prioritization, however, means companies like Acer and Asustek, both of which make cheaper netbooks and desktops for consumers with a tighter budget, are more likely to get shorted in the supply chain.

"The situation is severe, but the exact timetable for a full recovery is unknown at the current time," Acer said in a statement. "There will definitely be price hikes for HDD storage, but the magnitude is uncertain for now and would most likely be transferred to end users."

Apple officials have said they expect an industrywide shortfall, but aren't sure how things might play out for their company.

"Like many others, we source many components from Thailand," Apple's chief executive Tim Cook said during a conference call with analysts and journalists. "It is something that I'm concerned about. How it affects Apple, I'm not sure."

The devices that could get a boost from the Thai disaster are laptops and other devices that run on solid state, or flash, memory. Most mobile devices and tablets run on solid state memory, which doesn't have moving parts like a hard disk drive, and an increasing number of laptops, including Apple's MacBook Air, use solid state because it's typically quicker and smaller.

Schrock said that although tablets like the Apple iPad are already popular, the Thai floods could push more consumers in that direction rather than spending more on overpriced PCs.

For now, though, those affected by the supply chain disruptions aren't throwing in the towel on this holiday shopping season.

"It's not pretty, but we just don't know what's going to happen," Batt said. "Bad news comes along from time to time, and this is one of those times."

By Ross Boettcher
WORLD-HERALD STAFF WRITER

Tuesday, November 15, 2011

Seven Technology Predictions for 2012

2011 has been a remarkable year, what with the extraordinary weather patterns, more than the usual worldwide political turmoil, and national apprehension over the delicate state of our economy. During times like these, people tend to increase their need to communicate, to give and get information, to reach out to their friends, family, community, and the world. It's all about information flow. As 2012 draws near, my predictions for technology are all about communication and the information stream. Technology in 2012 and beyond will significantly shift and enhance the exchange of information for people and for enterprise organizations.

1. Corporations adopt social networking as a primary communication tool.
No longer just for the younger crowd, the impact of social media will continue to increase. The rise of social networking and its corresponding sites is comparable to the introduction of email in the business environment a decade or so ago. Prior to email, the phone was king (and the type of phone and access lines mattered - but I don't want to date myself that much). It is a serious mistake to underestimate the power of social networks, whether it is Tumblr, Chatter, Yammer, Twitter or IM - though some might regard IM as "old school". It is truly amazing just how quickly a groundswell can be raised over a social event. Just look at how rapidly the November 5th Bank Transfer Day was organized - or the mounting support for the Occupy Wall Street movements. Far beyond keeping in touch with family and friends, social networks have been influential in organizing popular social and music events, exposing on-the-scene political riots, and helping release people who have been incarcerated overseas. Social networks have tremendous power and influence - far more than most people realize (and probably more than they want to accept).

The "traditional" workplace has quickly changed - more people are working from home or remotely and want, and even need, this contact. This does not mean the real-time interaction of the bricks-and-mortar workplace environment has been lost, just changed.

Business executives, employees and home users keep in contact through Facebook or other social sites, blurring the line between work and social life. In many cases, these social networking sites are being used to share insights, news, results, and other information that would normally be on a bulletin board or mass e-mail. Many companies today are actively pursuing social networking collaboration technologies to further their communication reach at much reduced costs. In some cases, you may receive a discount by "Liking" a business.

The use of Microsoft's Lync or Office 365 will enable users to have both business and personal contacts in one IM interface. Business and IT leaders will have to learn to use these to accelerate the business-decision process and maintain relevance with workers. Customers will expect immediate answers to questions, and employees can accomplish more through these communications. Social networking will become one of the main, if not THE primary means of communications in many corporate environments.

2. Death of the laptop?

No, the laptop will not disappear next year or even the year after - but the decline of the ubiquitous laptop, especially in the business environment, will accelerate in 2012. The laptop will never truly disappear, but for many business users, a tablet will more than suffice. After all, the majority of the work done on business laptops is accessing and reading email, using business applications and playing Angry Birds (not by all, of course) - and not necessarily in that order. Laptop usage will diminish as the capabilities and accepted presence of Apple's iPad, Amazon's Fire and other such devices increase. The same will hold true for other "smart" devices. Today you can control many household appliances and services through your smart device - even going as far as locking your car. You can use your smart device as a virtual wallet, and it can serve as your boarding pass for aircraft - who knows, maybe the smart device will spell the end of our wallets as well!

3. The "To the Cloud" movement continues.

"To the Cloud" - That is going to be THE mantra this year and will certainly be more pervasive and louder in the years to come. The cloud has become synonymous with almost any service/server that is no longer maintained on-premise in your organization. The advantage of Cloud solutions are many - reduced infrastructure costs, ease of growth and providing a consistent experience for local and remote users. The advantage is that this is done as an alternative to hosting and maintaining your own servers and application software. In cloud computing, businesses pay for only the resources that they consume. Businesses that host services and applications in the cloud improve overall computer utilization rates, as servers are running at or near full capacity from clients connecting remotely.

4. The need for Virtualization skills will grow exponentially.

Virtualization means consolidating multiple physical servers into a virtual machine environment. Virtualization vendors such as Citrix, VMware and Microsoft are making it possible for companies to improve the efficiency and availability of IT resources and applications. Virtualization is being adopted by companies of all sizes as a means to reduce costs through server consolidation and lower cooling requirements. Application virtualization has also become very popular with businesses. Having the skill set to deploy applications that connect securely through a browser is critical for companies with numerous offices.

This is one area that is going to be very interesting to watch in 2012 - especially due to the dynamics among Cisco, Microsoft, and VMware. There is no doubt that the demand for skills in this arena will grow exponentially.

5. The days of owning software are numbered.

You don't need to look too hard to see that SaaS (Software as a Service) is the wave of the future. Just look at the model used by Blizzard and other game companies: you buy the game and then pay a monthly fee for the privilege of playing it online as well. Now carry this forward to the major software vendors. They must be dreaming of the revenue stream when customers no longer just buy the software but pay a monthly access fee. From their perspective, it would help reduce software piracy; you could no longer sell your old software; and did I mention the revenue stream? There may be an advantage from a user perspective, too: instead of buying a software package outright for a short-term project, they may be able to rent it for a period of time. Now look at the cloud and Office 365 - with Office Web Applications, aren't you in effect "renting" access to the software (and other services as well) for a period of time?

6. Real bandwidth to the household.

A New York Times report ranked the United States 26th in the world when it comes to Internet access speed. According to a report from Pando Networks, the US had an average speed of 4.93 Mbps to the household. In contrast, South Korea (#1) had an average of 17.62 Mbps, Romania (#2) 15.27 Mbps and Bulgaria (#3) 12.89 Mbps. As an example, Finland passed a law that entitles every person to a 1 Mbps connection (supposed to rise to 100 Mbps by 2015). The US is also increasing available bandwidth; in fact, the average peak connection speed in the United States increased 95 percent from the first quarter of 2008 to the first quarter of 2011. This must continue if the US is to stay competitive. Internet speed and broadband availability will increase significantly next year.

7. The rise of streaming media.

What, pray tell, do you mean by streaming? Netflix had the right idea in streaming movies to the home. Now think about this for other items as well. Streaming of TV to smart devices means you can watch your favorite show on the commute home. In areas where cable is either not available or does not provide the content at an acceptable cost, satellite TV and radio have made huge inroads. Gone are the days of the monster dishes that could have been escapees from a bad sci-fi movie; now we have small dishes, similar to ones found throughout the world. The satellite streams the content to our TVs, computers and other devices (including refrigerators). Now let us add cell phones (well, let us be honest and just call them smart phones). The number of new landlines is diminishing as the number of smart devices increases. Why have a landline and answering machine when you can have a smart phone, voice mail and Skype with you all the time? As long as there is wireless access, we can use our smart devices and computers (even at 35,000 feet).

References

1. http://gcn.com/articles/2011/07/29/fastest-broadband-cities.aspx

Randy Muller ~ November 2011

AMD Bulldozer Chip Wants To Flatten Intel

Bulldozer Opteron 6200 processor, first x86 chip to use up to 16 cores, aims to help servers host many virtual machines while consuming less power per core.
AMD launched its Bulldozer Opteron 6200 processor Monday, the first x86 chip to be manufactured with up to 16 cores. The previous Opteron 6100 maxed out at 12 cores.

The more cores, the better the chip tends to be at hosting virtual machines. Bulldozer represents a redesign of the original, eight-year-old chip; the new Opteron 6200 has been optimized for virtualized workloads, with a low-power variant, the Opteron 4200EE, optimized for lower power consumption and reduced-cost cloud operations.

At the same time, it's something of a gamble for AMD, which needs a winner in the server chip market to revive its languishing fortunes. In launching Bulldozer, AMD is unveiling a new microarchitecture that alters the definition of a processor core.

The original Opteron took market share from Intel's Xeon series, with server designers like Sun Microsystems' Andy Bechtolsheim snapping it up to produce new x86 servers. The Bulldozer Opteron is a complete makeover of the chip in an attempt to win back AMD's eroding share of the chip market.

"They're real workhorses, capable of running lots of virtual machines," said Margaret Lewis, director of server software for AMD, in an interview. That makes Bulldozer a candidate for cloud service providers as well as enterprise data centers that wish to maximize workloads per virtualized host, she said. For example, a single rack full of 16-core 6200s can host 672 virtual machines, with each VM having its own core, Lewis said.

At the same time, the Bulldozer architecture represents a rearrangement of components on the surface of the chip die. Lewis called Bulldozer's paired-core building block a chip "module." What's new is that each module contains two integer units and a single floating point unit. Under Intel's definition of cores, each core would have its own integer unit and its own floating point unit. AMD has reshuffled the deck and announced that two cores share two integer units and a single floating point unit. This makes sense in virtualized server operations, where the floating point unit is typically used less than half as often as the integer units.

Under certain circumstances, critics say this approach will hurt single-threaded performance involving floating point operations, such as in a scientific application on an individual desktop. That doesn't appear to be AMD's concern as it aims for the virtualized enterprise server and cloud data center markets.

"The integer units can work faster because they share things that make sense to share," said Lewis. That includes the Level 1 and Level 2 caches as well as the floating point unit.

That's a good idea if you're looking to increase real estate on a single chip for the parts of the processor that speed up handling the threads launched by many independent virtual machines or many cloud workloads.

AMD's Michael Detwiler, server product marketing manager, told Ars Technica that the chip adds 25% to 35% more processing power over the Opteron 6100. AMD's announcement also claimed the chip was 84% more powerful than the Intel Xeon 5600. But that's comparing the latest generation AMD chip to an earlier Intel chip with only six cores, Ars Technica warned.

The low-power variant of Bulldozer, the 4200EE, consumes just 35 watts spread across eight cores, or about 4.4 watts per core. The lowest-power Intel x86 server chip, AMD said in a footnote to its announcement, is the L5630 Xeon, which spreads 40 watts across four cores, or 10 watts per core. (Intel's low-power Atom chips are not part of the Xeon server family.)

In addition, Lewis said the Bulldozer chips "have knobs to run to control server power consumption." If a core goes to an idle state, it can be shut down until it's needed again, she said. And some cores can be run at full throttle or maximum power for which they were designed, while others run in power conserving mode.

Critics point out that elements of AMD's new architecture, such as its intentionally lengthened data pipeline, resemble the Pentium 4 format, which turned into a spectacular Intel failure. But AMD's Lewis prefers to say that AMD has put together a combination of tradeoffs for today's server market. It is willing to take some knocks for circuit design if the Bulldozer modules yield more virtual machines served using less power. That's a combination that could play well with some purchasers in today's market.

An October survey by InformationWeek on server purchasing found x86 servers more in vogue than ever, albeit with concerns about the heat they generate as more units are added to the data center.

by Tim Fischer

For a fuller discussion of how AMD is changing, or blurring, the definition of what constitutes a CPU core, see David Kanter's AMD's Bulldozer Microarchitecture.

Best Graphics Cards For The Money: November 2011

In this month's update, we discuss several price adjustments that impact our recommendations. We also look into the crystal ball and suggest that there may not be another graphics launch in 2011 as a result of several different factors.

Detailed graphics card specifications and reviews are great—that is, if you have the time to do the research. But at the end of the day, what a gamer needs is the best graphics card within a certain budget.

So, if you don’t have the time to research the benchmarks, or if you don’t feel confident enough in your ability to pick the right card, then fear not. We at Tom’s Hardware have come to your aid with a simple list of the best gaming cards offered for the money.

November Updates:

The news this month centers on price adjustments. While none of the changes are game-changing, they do alter our recommendations to some degree. For example, the average price on AMD Radeon HD 6790, 6850, and 6870 graphics cards is up about ten dollars per board. This enables a tie between the Radeon HD 6870 and the GeForce GTX 560. The Radeon HD 5570 also went up a few bucks, and is now priced too close to the superior Radeon HD 5670 to keep its recommendation. On the other hand, the Radeon HD 6950 2 GB is a few dollars cheaper. Nvidia's GeForce GTX 560 Ti is down a bit as well, and now shares a recommendation with the Radeon HD 6950 1 GB.

On a side note, for buyers interested in a great deal on a budget gaming card, we noticed that PNY's GeForce GT 240 GDDR5 is on sale for $40 at Newegg with free shipping. This card isn't quite as powerful as the $70 Radeon HD 5670. At $40, it's a steal, though.

It also looks like AMD's Radeon HD 5750/5770 cards are being phased out in favor of the Radeon HD 6750/6770. Since the 6700 series is essentially equivalent to the 5700 series with added Blu-ray 3D support and comparable pricing, the 5700 series won't be missed. Just don't mistake the 6700 cards for upgrades.

As for other news on the video card front, unfortunately, we don't have our fingers crossed for new graphics architectures in the last couple months of the year. There are a few reasons for this, but three stand out most prominently. First, the current generation of cards is more than capable of handling today's most demanding games (especially with companies like id delivering low-spec console ports like Rage). It's a good thing developers like Dice can still demonstrate the PC's place in gaming with titles like Battlefield 3 (see Battlefield 3 Performance: 30+ Graphics Cards, Benchmarked if you missed it two weeks ago). Second, with no major update to DirectX being discussed, there's no new API to drive interest in new graphics hardware. Third, next-gen products like AMD's Radeon HD 7000 series and Nvidia's Kepler are still subject to manufacturing kinks in TSMC's 28 nm node, and it looks like it will take some time to get the bugs out of that process. At this point we wouldn't be surprised to see some new products manufactured using 40 nm lithography, and we've heard rumors to support that. There's nothing concrete to report yet, though, and, for the first time in a long time, it looks as if we won't have any major announcements to take us through the end of the year.

Some Notes About Our Recommendations

A few simple guidelines to keep in mind when reading this list:
  • This list is for gamers who want to get the most for their money. If you don’t play games, then the cards on this list are more expensive than what you really need. We've added a reference page at the end of the column covering integrated graphics processors, which is likely more apropos.
  • The criteria to get on this list are strictly price/performance. We acknowledge that recommendations for multiple video cards, such as two Radeon cards in CrossFire mode or two GeForce cards in SLI, typically require a motherboard that supports CrossFire or SLI and a chassis with more space to install multiple graphics cards. They also require a beefier power supply compared to what a single card needs, and will almost certainly produce more heat than a single card. Keep these factors in mind when making your purchasing decision. In most cases, if we have recommended a multiple-card solution, we try to recommend a single-card honorable mention at a comparable price point for those who find multi-card setups undesirable.
  • Prices and availability change on a daily basis. We can’t base our decisions on always-changing pricing information, but we can list some good cards that you probably won’t regret buying at the price ranges we suggest, along with real-time prices from our PriceGrabber engine, for your reference.
  • The list is based on some of the best U.S. prices from online retailers. In other countries or at retail stores, your mileage will most certainly vary.
  • These are new card prices. No used or open-box cards are in the list; they might represent a good deal, but it’s outside the scope of what we’re trying to do.

12:00 AM - October 28, 2011 by Chris Angelini

Five $160 To $240 990FX-Based Socket AM3+ Motherboards
Forty-two PCIe lanes give the 990FX a clear connectivity lead over competing Intel chipsets. We compare five class-leading products using AMD's FX-8150 to see which offers the best combination of performance, overclocking, integrated features, and value.

When it comes to the popularity of our stories, CPUs run second only to new graphics cards (which seem to get everyone's blood pumping the fastest). Motherboards fall behind quite a ways. That's a shame though, because the right board is an absolute necessity for connecting processors to GPUs, and every other component inside your machine.

This is where AMD gives a lot of love to its customers, whereas Intel tends to skimp more often. Nowhere is the difference between the two companies' mainstream parts more evident than in the chipset segment. The 990FX's 42 total PCIe 2.0 lanes provide a lot more potential throughput than Intel's popular Z68 Express, which is limited to 16 lanes from the CPU and a handful more on the Platform Controller Hub.

Of course, a fan of Intel's work could argue against the need for 42 lanes of second-gen PCIe when the 36 native to X58 Express support multi-card graphics configurations just as capably. But such a comparison really isn't necessary. After all, we've known for almost a year that Intel’s lower-cost Sandy Bridge-based parts outperform the pricey six-core Gulftown-based processors in many desktop benchmarks, including pretty much every gaming scenario we throw at the two platforms.

And, it just so happens that Intel's mainstream (and multiplier-unlocked) Core i5 and Core i7 chips are in the same league as AMD's most expensive enthusiast-oriented FX CPU.

The Importance Of PCIe

Gaming is where the Sandy Bridge architecture most easily proves that you don't need a thousand-dollar processor to turn in the best frame rates, and that's in spite of the 16 lanes built into each CPU's die. We've even seen situations where an NF200 bridge soldered down onto a Sandy Bridge-based motherboard enables performance just as compelling as a high-end LGA 1366 configuration. The thing is, a Z68 or P67 platform's 24 total PCIe 2.0 lanes aren't explicitly set aside for graphics cards. They have to handle every device attaching via PCI Express, including network and storage controllers.

We’ve even tested a few "enthusiast-class" Sandy Bridge-based motherboards so loaded with features that simply installing an add-in card forced certain slots or on-board controllers to become disabled. That doesn’t sound like a solution a power user would willingly accept to us.

As of this moment, enthusiasts who need more connectivity than the LGA 1155 platform offers are left to choose between “upgrading” to one of Intel’s older LGA 1366 platforms, paying extra for a motherboard with bandwidth-sharing PCIe bridges, or shifting to a platform with more native PCI Express, a wider range of unlocked processors and prices, several times the reference clock overclocking headroom for locked processors, and a downright respectable chipset: AMD’s high-flying 990FX.

Today we consider a few of the most enthusiast-oriented Bulldozer-compatible motherboards that employ the 990FX northbridge.

12:00 AM - November 7, 2011 by Thomas Soderstrom

Tuesday, November 8, 2011

Cloud Database as a Service: Planning your DBMS strategy

No matter what kind of cloud computing services operators plan to offer, they need to have an effective cloud database strategy in place or customers won’t have access to the necessary data in their cloud applications.

With that growing awareness, one of the most significant (and complex) cloud infrastructure issues facing cloud providers of many types is deciding how database support will be offered in the cloud -- a question that is also leading providers to sell Database as a Service.

The wrong cloud database strategy can create application performance problems significant enough to discredit a cloud service, forcing the provider to incur additional costs to establish credibility with users. Ready or not, in 2011, database capabilities in the cloud are going to become a differentiator and a factor in sales for cloud providers.

The cloud database issue is complicated because it sits at the intersection of two cloud infrastructure models, two storage service models and two database management system (DBMS) models. Sorting out the details will require cloud services providers to consider their infrastructure, network performance and service goals.

The following services models can affect cloud database support:
  • Single- and multi-site cloud infrastructure models. The two cloud infrastructure models differ in the way that resources are allocated to customers. In the single-site model, a customer’s applications run within a single data center in the cloud, even if multiple data centers are available. This means that the storage and/or DBMS resources used by a customer can be contained within a single storage area network (SAN), and that the customer’s application performance in the cloud can likely match that of a standard data center that uses virtualization. In the multi-site model, the customer’s applications can draw on resources from multiple data centers, which means that making the connection between the application and the database resources could involve WAN connectivity that limits performance. Whichever choice they make, service providers must be ready to address the issues that come with single- or multi-site cloud infrastructure.

  • Storage and database service models. The storage service models available to a cloud planner are Storage as a Service or the more complex Database as a Service. With storage services, the customer will access virtual storage devices as though they were native disk arrays, which means that the applications will send storage protocols (such as Fibre Channel over Ethernet or iSCSI) over any network connection. In the relatively new Database as a Service offerings, applications will access storage through a cloud DBMS that will accept high-level database commands and return the required results. This can create a less delay-sensitive connection, so it is better suited to cloud configurations where storage might be distributed over multiple sites.
Another major cloud database planning decision is whether a cloud database service should be based on the popular relational database management system (RDBMS) and its Structured Query Language (SQL) standards, based on a lighter-weight RDBMS without SQL, or based on a non-relational structure like Google BigTable, which gives users dynamic control over data layout and format. Note: The terms NoSQL or NoREL have been applied to the latter two approaches, but they are used inconsistently.

Weighing the value of Database as a Service
For cloud planners, the most challenging issues may be deciding whether to roll out Database as a Service offerings, and if so, which DBMS to offer. Some cloud providers are starting to offer Database as a Service, which may compel other network operators to do the same for competitive reasons.
Database as a Service has advantages beyond marketing. With a cloud DBMS, storage virtualization isn’t directly visible to the applications, which gives operators more latitude in how they manage storage resources. With a direct storage model, a mechanism for storage virtualization that protects customers’ data from being accessed by others but still makes the virtual disks look “real” is essential. The efficiency of this process is paramount in controlling the performance of applications that use storage extensively. It’s easy to provide cloud database services as part of a cloud Platform as a Service (PaaS) offering, but the applications may have to be written to access cloud database services in some Infrastructure as a Service (IaaS) configurations.

If customers aren’t likely to access storage/DBMS across multiple data centers in the cloud, the performance implications of the storage/DBMS choices are less critical. But where multi-site resource distribution among customers is expected or required, it may be necessary to optimize performance by reducing the sensitivity of storage access to network performance, as well as enhancing network performance for site-to-site connectivity. Offering Database as a Service can help by replacing storage input/output (I/O) with simply sending a query and a return of results. Even when considerable data is retrieved by a query, the query/response traffic is typically much lower in volume than storage I/O traffic, reducing network congestion and improving performance.

Choosing the right DBMS for your cloud services
Overall, the type of DBMS offered in a cloud-based Database as a Service is a function of the needs of the applications, which in turn may be a function of how the service offerings are positioned. Planners preparing for large-scale enterprise services are probably considering an IaaS model, which suggests that cloud database services should be based on storage virtualization rather than on cloud DBMS. At the minimum, cloud services targeting enterprise application overflow or data center backup almost certainly demand an IaaS model, as well as a virtual storage service capability. Database as a Service can be offered in addition to Storage as a Service, but not likely as a replacement. For SMB cloud offerings, Database as a Service is likely to have more appeal.

The impact of DBMS issues on application performance will always be easier to control when cloud services are offered from a small number of local and highly interconnected data centers, or when individual customers are limited to a single data center or a tightly connected cluster. Network operators may be able to achieve sufficient resource scale within a single data center to reduce performance issues related to SAN extension across multiple sites, which could be a competitive advantage. Database as a Service can then be targeted more at customers’ application needs rather than at infrastructure and performance considerations.

Cloud database models are a major issue for buyers, even though they aren’t a mainstream media topic yet. They are a major factor in assuring that cloud services will perform well and offer the same level of data reliability as private data centers. Proper cloud database strategy selection can provide a network operator with a key feature differentiator in an increasingly competitive cloud services market and help increase both adoption rates and overall sales and revenue. A little time spent planning can pay a major dividend.

Author: Tom Nolle

Building a cloud computing infrastructure to serve dual purposes

We’ve been bombarded with cloud computing services terms that describe many types of services that can be offered through the cloud. For network operators, the challenge with multifaceted clouds is that they have a variety of business drivers. The catch is that if different drivers push cloud computing infrastructure in incompatible directions, the consequences could be dire for service providers’ capital and operations expenses, as well as for return on investment (ROI).

Unlike enterprise clouds, service provider clouds must function as platforms for traditional OSS/BSS and internal IT, as platforms to host features and content, and as platforms for cloud services for different groups of customers -- enterprises, SMBs and consumers -- because of this diversity of business drivers. Virtually every operator will have a different balance of need and opportunity in each of these areas.
In addition to offering cloud services including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS), Database as a Service and Hosting as a Service, network operators are also giant IT consumers in their own right. That means they build data centers not only for OSS/BSS and traditional applications, but also for hosting content and features.
Network operators have most of the same concerns as enterprises about cloud computing infrastructure efficiency, application performance and even “cloudsourcing” to third parties, which means that operators are shifting their IT investment from an internal fulfillment strategy to one hosted in the cloud. For this supply-side vision of cloud migration, the best course is often dictated by the cloud strategies supported by the major IT vendors, just as it is for enterprises.

Yet the issue of business and network transformation represents a special challenge for service providers -- restructuring their network infrastructure to meet changing revenue goals and new service targets. Five years from now, most operators will be earning the majority of their revenues from sources that were minimal or non-existent contributors five years before.

Cloud computing infrastructure’s dual role -- internal IT and customer services
The service-driven transformation of network operators is more than a network infrastructure transformation. Content and social network services, mobile features, app stores and all of the things included as elements in the future of network services are predominantly hosted rather than simply connected. That makes a provider’s IT structure as critical as its network. Hosted services and features also make the architecture that binds these IT elements into a cohesive set of resources critically important.

The cloud is important to operators because it is the abstract model that nearly all providers are selecting for their own internal IT needs, which include hosting features and content, as well as supporting their back-offices or OSS/BSS systems. If service providers were to create a single cloud for all their diverse IT needs, it would be among the largest pools of IT resources ever built. And if cloud computing is an opportunity created by economy of scale, then operators will be the leading contenders to provide it.

This story works just as well in reverse. Network operators worldwide have listed cloud computing as one of their top three applications to create new revenues. A decision to offer cloud computing services and create infrastructure for these services -- as Verizon did by buying cloud provider Terremark -- will create an IT platform that can then be exploited for the operators’ own IT applications. You could consider this a “service-first” cloud evolution.

The fact that service provider clouds obtain optimum economies of scale by serving many different missions means that their design has to support a range of uses that most private or even public cloud infrastructures could elect to avoid.

An example is the classic “IaaS versus PaaS versus SaaS” debate. Operators that want to host SaaS providers’ services may need to offer their partners IaaS or PaaS clouds, or both. Most cloud providers elect to support one approach, but a network operator’s cloud will likely have to support all three, since they offer customers different functions:
  • IaaS services are the baseline cloud offering due to Amazon’s EC2 popularity.
  • SaaS services are mandatory for small business and consumer offerings.
  • PaaS services are essential for some enterprise data center workload overflow and backup applications.
In short, all models are needed.

Operators turn to PaaS as service delivery platform architecture
Many network operators are now conceptualizing their own internal use of cloud infrastructure in PaaS terms as well. If we were to describe the architecture of a service delivery platform in modern terms, we’d call it a PaaS host because it combines a hardware platform with a structured set of middleware that creates a uniform development environment. Feature-building missions for the cloud are certain to be supported by assembling and even creating custom middleware.

For operators that want to offer app stores, developers are likely to operate inside a micro-PaaS sandbox that includes tools to manage the offerings and integrate them with operator billing and support applications.

IT vendors customize cloud middleware for providers
Operators have many choices of cloud architecture, and some of the popular IT giants are now customizing cloud middleware into packages for use by providers as well as enterprises. IBM’s Cloud Service Provider Platform (CSP2) and Microsoft’s Azure Platform Appliance are examples of this trend, and the use of an integrated toolkit has the advantage of ensuring compatibility and management commonality across the elements of cloud infrastructure. Some operators are also considering clouds created from open source or commercial dual-licensed software such as Eucalyptus (Eucalyptus Enterprise Edition) and Cloudera’s Hadoop distribution.
The key to successful cloud computing infrastructure
For operators, the key to creating successful cloud services and supporting internal IT on the same infrastructure is ensuring that an IaaS framework built on data center virtualization can be extended upward first to the platform and then to the services level without creating voids in the management processes, and without compromising a single-resource economy of scale value proposition.
In the final analysis, the combination of the economies of resources and management efficiency will define the provider cloud and differentiate it from other cloud computing offerings in the market.

Author: Tom Nolle

Building multiple cloud services from a common infrastructure

“E Pluribus Unum,” which translates to “from many, one,” is a familiar phrase written on U.S. coins. For service providers planning cloud computing services, the motto and challenge is exactly the opposite: How can they produce many services out of one infrastructure, or “e Unum Pluribus”?

Most service providers have neither the option nor the desire to pick out a single model of cloud service and focus investment on it alone. Leveraging their infrastructure and operations to produce superior economies of scale is critical, and from a sales perspective, it will be easier to exploit the cloud quickly when enabling multiple cloud services rather than having only one.

Technically, cloud computing is a resource-pool strategy. A set of servers and storage devices located in one or more data centers form a pool of resources that are allocated to service customers on an as-needed basis. The larger the resource pool, the greater the efficiency and lower the unit cost of computing and storage. A lower cost base then permits network operators to offer computing and Storage as a Service at prices compelling to the buyer and at margins profitable to the seller.

The value of generality at the cloud service level is clear: More services equal more sales, which equal more resource-pool efficiency and higher profits. But the question is how to achieve the multiservice goal with a practical set of cloud computing tools.

Analyzing the components that enable multiple cloud services
The strongest starting point to answer the question of how to achieve multiple cloud services using one set of cloud computing tools is to consider how the cloud computing service in question—Infrastructure as a Service (IaaS), Platform as a Service (PaaS) or Software as a Service (SaaS)—would appear to a buyer and what the cloud resource interface needs to resemble. The goal of a universal infrastructure would be to offer all of those service appearances and interfaces at comparable efficiency and with full management capability. Where the requirements compromise the ability to use one infrastructure, operators need to weigh the costs and benefits of that offering. Here’s a closer look at the requirements of the three cloud services.

Infrastructure as a Service. IaaS services would logically appear to the buyer as a virtual machine host that is essentially an extension of the buyer’s own data center virtual resource pool. Running an application on a virtual host means creating a machine image that includes the application and its operating system software and middleware, and then loading it on the selected cloud server.
Since the application image and operating software are provided by the user, the cloud provider has limited options for managing the internal behavior of the application. The virtual machine host can be managed, but not the machine image software. Still, this model is what made Amazon’s EC2 famous, and operators almost universally expect to support it. A virtualized set of servers and a director function are required to assign each requested virtual machine to the best available server resource.
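The director role amounts to a placement decision, and a minimal sketch of it might look like the following; the server names, core counts and least-loaded placement rule are invented for illustration and are not any particular vendor's scheduler.

    from dataclasses import dataclass

    @dataclass
    class Server:
        name: str
        total_cores: int
        used_cores: int = 0

        @property
        def free_cores(self):
            return self.total_cores - self.used_cores

    def place_vm(servers, requested_cores):
        """Assign a requested virtual machine to the best available server."""
        candidates = [s for s in servers if s.free_cores >= requested_cores]
        if not candidates:
            return None                      # pool exhausted; request must queue or fail
        best = max(candidates, key=lambda s: s.free_cores)   # least-loaded placement
        best.used_cores += requested_cores
        return best

    pool = [Server("host-a", 32), Server("host-b", 16)]
    chosen = place_vm(pool, 8)
    print(chosen.name)                       # host-a, the server with the most free capacity

A production director would weigh memory, storage, affinity and licensing constraints as well, but the shape of the decision is the same.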

Platform as a Service. PaaS services can be viewed as an IaaS framework that has defined one or more middleware “services” available to the applications within the cloud. These middleware services therefore do not need to be included in the machine image provided by the user. The most common platform service is Database as a Service, followed closely by management services.

To the user, the value in consuming a platform service versus having a completely customer-provided machine image of the application is that the platform service is cloud-aware and can be managed and optimized for a cloud-hosted execution. Cloud databases are almost mandatory for applications that will involve data exchange among application components. Otherwise, these components would have to develop an internal mechanism for data sharing that could be relied upon to work efficiently within a cloud, where resource-to-application assignment is nearly invisible to the user. Cloud management tools, including tools to manage application performance, allow both users and the cloud provider to merge application performance issues with virtual machine resource performance, giving a better management picture.
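As a concrete (and deliberately simplified) sketch of that contrast, the snippet below has two application components share state through a database interface rather than through a mechanism baked into their machine image; Python's sqlite3 module stands in for a provider's Database as a Service endpoint, and the table and component names are invented.

    import sqlite3

    # sqlite3 is only a local stand-in here for a cloud-hosted database service.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")

    def order_entry_component(conn, order_id):
        # One component writes shared state through the platform database...
        conn.execute("INSERT INTO orders (id, status) VALUES (?, 'received')", (order_id,))
        conn.commit()

    def fulfillment_component(conn):
        # ...and another reads it, with no private data-sharing mechanism to build
        # or to tune for a cloud whose resource assignments it cannot see.
        return conn.execute("SELECT id FROM orders WHERE status = 'received'").fetchall()

    order_entry_component(db, 1001)
    print(fulfillment_component(db))   # [(1001,)]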

As applications begin to consume platform services, the IaaS framework is likely to take on the appearance of a data center optimized for service-oriented architecture (SOA). The idea of the user providing a “machine image” is replaced by the idea of the user providing an application to catalog—an application that will be represented in an SOA directory (UDDI) and instantiated on demand, using the provider’s virtual machines. This makes PaaS services effectively an SOA overlay on IaaS infrastructure, a model that appears to have been adopted explicitly by Microsoft with Azure and by IBM with its Cloud Service Provider Platform.
Platform services are represented as SOA services and accessed as any internal software service would be accessed. This also facilitates the integration of PaaS cloud services with enterprise software; any SOA tool can provide the binding required.
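Reduced to a toy, the catalog model looks something like the sketch below: the user registers an application as a named service and the provider instantiates it on demand. This is an invented stand-in for a real SOA directory such as UDDI, not an implementation of one.

    # Invented, minimal stand-in for the catalog-and-instantiate idea.
    registry = {}   # service name -> factory that launches the application

    def register(name, factory):
        """Catalog an application under a service name."""
        registry[name] = factory

    def instantiate(name, **params):
        """Look a service up in the catalog and start an instance on demand."""
        if name not in registry:
            raise LookupError(f"service {name!r} is not in the catalog")
        return registry[name](**params)

    register("invoice-processing", lambda region="us-east": f"invoice app started in {region}")
    print(instantiate("invoice-processing", region="eu-west"))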
Software as a Service. Moving up the stack to SaaS, we again find that the next level of cloud services can begin as an extension of the last. The SOA abstraction of a “service” can be applied both to application components and to entire applications. In the former case, a SaaS offering would be associated with a SOA-compatible interface (such as the Simple Object Access Protocol, SOAP) and could be registered as a service in a SOA directory for access, as described for a PaaS cloud service. In this case, an enterprise might be consuming a cloud SaaS component within an application otherwise hosted in-house.
If the entire application were represented as a service, the SaaS offering could be made through a traditional URL or RESTful interface and accessed by a browser. This is the type of SaaS service that the well-known Salesforce.com provides. Operators could provide these types of services directly (using open source software, licensed software or software they develop), or they could offer wholesale PaaS or IaaS support to third parties that would then offer SaaS services.
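Access through that kind of URL or RESTful interface might look like the sketch below; the endpoint, path and token are hypothetical, and a browser or any HTTP client plays the same role.

    import json
    import urllib.request

    def list_accounts(base_url, api_token):
        """Fetch a resource from a (hypothetical) SaaS application's REST interface."""
        req = urllib.request.Request(
            f"{base_url}/api/accounts",
            headers={"Authorization": f"Bearer {api_token}",
                     "Accept": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

    # Example call against an invented endpoint (would require a real service):
    # accounts = list_accounts("https://crm.example-provider.net", "token-123")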

Building more complex cloud services on the components of the others
Cloud services at the higher levels, which include SaaS and PaaS, displace more user support and license costs and, therefore, justify higher prices. If operators can use the same infrastructure to offer these services as they use for the more general IaaS services, they can increase their profits and provide cloud services more easily integrated with enterprise IT. That can be particularly valuable in addressing the data center backup and application overflow opportunities that enterprises value most highly for cloudsourcing.

Author: Tom Nolle

A cloud provider's guide to building multiservice cloud platforms

Following the initial excitement that cloud computing will change life as we know it, we’re now in the heavy-lifting phase that — over time — will make cloud services a trusted reality.
But to help enterprises, organizations and government agencies embrace the cloud for mission-critical applications rather than only as an economical test and development platform, cloud providers have to step up in a big way to design the right infrastructure, whether they’re entrenched telecom providers, newer cloud specialists or niche providers.
Challenged on many fronts to usher in the era of scalable and reliable cloud services, providers have to design an architecture and build cloud platforms that can accommodate multiple services without losing the economies of scale that made cloud services an attractive proposition in the first place.

This expert lesson on building multiservice cloud platforms like a pyramid scheme (but in a good way), by frequent contributor Tom Nolle, looks at a number of issues all cloud providers have to address, including capitalizing on a common cloud platform to build multiple services, using your cloud platform for both internal and external purposes, and getting your cloud database strategy in order to facilitate application performance.
Here are the building blocks you need to get your cloud platform strategy up to speed.

1. Building multiple services to operate on one cloud platform
First and foremost, most cloud providers don’t have the luxury of building a separate cloud platform for each service they plan to offer. To make the economics work for themselves and for their customers, they need to leverage their infrastructure so that multiple services operate off one cloud platform. Using your resource pool of data centers, servers and storage devices wisely is the key to building more complex cloud services. Providers can start with the basic cloud platform elements needed for Infrastructure as a Service, then build more complex cloud services, including Platform as a Service, Software as a Service and beyond, on top.
 
 
2. Building a cloud computing infrastructure to serve dual purposes
Moving forward, cloud providers have to remember that they are giant IT consumers themselves and need to benefit from their cloud infrastructure as much as any customer does. Moving beyond the last-generation silo mentality, providers need to go further than supporting multiple cloud services for customers and make sure their cloud platforms also serve their own internal OSS/BSS and IT needs. Any provider’s cloud platform has to fill the dual roles of internal IT and customer services, which means using an Infrastructure as a Service architecture built on data center virtualization to benefit from economies of scale.
Understand the dual-purpose cloud platform for internal and external use.
 
 
3. Cloud Database as a Service: Planning your DBMS strategy
To offer cloud services, cloud providers have to have an effective database strategy—otherwise customers won’t be able to get to their data. So why not consider rolling out Database as a Service at the same time, since you’ll need a cloud database strategy that won’t affect application performance anyway? Database as a Service is a good differentiator for cloud providers, but it requires careful analysis of your cloud infrastructure, your storage service model and your database management system models.
Read this article to find out if Database as a Service should be in your future.

Author: Tom Nolle

Friday, November 4, 2011

Time for Cyber Discourse on China

China is being accused of hacking corporate, government and military networks in the U.S. for economic gain.  Policy makers need to be versed in cyber security and figure out how to respond.  

JAMIE METZL CAUSED quite a stir late this summer with an article he wrote for the Wall Street Journal in which he blasted China’s computer hacking efforts. Metzl, executive vice president of Asia Society and a former higher-up in the State Department and National Security Council, condemned China’s actions as “running roughshod over global norms” to advance its economic interests.
Unfortunately for him, he used McAfee’s Shady RAT research—which received criticism from several experts in the industry—as the backbone for his diatribe against China. Regardless, the bigger point here is that China’s 10 percent annual economic growth, a staggering number according to bean counters, isn’t exactly being built solely on blood, sweat and tears. Metzl and others we’ve talked to and listened to say China is relentless in its efforts to steal intellectual property, trade and corporate secrets, and anything else that will give it an economic edge—or growth spurt. I’ve had more than one casual conversation land on the topic of some product a startup has been slaving over suddenly showing up on the Chinese market months ahead of a potential launch here.
Are we covering new ground here? No. But it’s worth reminding those who will listen that the Chinese are on our networks and are leveraging state-sponsored or politically motivated computer hackers to steal anything that isn’t nailed down. China’s efforts aren’t limited to big business, either. Despite Art Coviello’s best efforts to tap dance around the obvious, I’ll take some journalistic license to read between the lines and conclude that the Chinese were behind the SecurID attack on RSA. The attacks that compromised the company’s flagship SecurID authentication technology have been the security story of the year. The seriousness of the attacks quickly came to light when it was revealed they were merely a jumping-off point for a downstream attack on the defense industrial base, as Lockheed Martin and others subsequently reported they too had been breached.
Chinese computer hacking is also the suspected culprit behind the Aurora attacks on Google, Adobe and upwards of 20 other enterprises, manufacturers and defense contractors in 2009. Plus, two Department of Defense reports released in the last 20 months name China as active in moving digital assets off American networks—corporate, government and military.

Can we stop the politically correct pretense and examine closely, in public circles, the impact of these intrusions on our economy and national well-being? Granted, if we cast that spotlight on the Chinese, we’re likely to get an equally bright light shined upon U.S. activities in China, Iran (hello, Stuxnet) and other foreign interests. So be it. It’s time for ground rules and time to tame the Wild West before real lives are lost, not just nuclear centrifuges and software source code.

There needs to be discourse at a policy level in Washington on cybersecurity and a clear understanding from legislators of these activities and their ramifications. The call for “offensive” weapons in cyberspace is also rattling around offices at the NSA and DoD, and clearly some have been developed (hello again, Stuxnet), but there are no rules of engagement written in stone yet in terms of how to react and reply to cyberattacks. How long before a physical, military response follows a cyberattack, with no means of attributing the attack and no channels of communication between policymakers well versed in cyber?

The Chinese aren’t shy about taking land or IP by eminent domain, it seems. Pretty much anything is in scope to advance their economic agenda, according to Metzl’s op-ed in the Journal. If so, it’s time to bring cyber to prominence in Washington and internationally, and to begin some real forward thinking before real companies are unable to compete in their respective markets or, worse, real lives are lost.

BY MICHAEL S. MIMOSO