So excited to get my invite to the VMware HOL, I couldn’t help but share. There have already been a lot of people on the Twittersphere who have had access, so not all of this may be new. Having spent 5 of the last 6 years at companies that specialized in training (3 at a SaaS LMS/HRIS vendor and almost 2 at an HR consulting company), I am absolutely in love with the interface of the HOL. Running in Chrome, there is almost no delay in page or menu loads, and the VM console performance is excellent.
The walk-through of the labs via the ‘Manual’ section (at least in my first lab) is well written, provides a useful backdrop to the work and moves very nicely from one task to the next. The labs (again, this is based on my first lab; I won’t mention that again) also walk you through related scenarios rather than just a step-by-step of one specific task. For example, if I am changing switch settings, I also have to adjust the VMs connected to that network. I particularly enjoyed this because it also exposed me to the 5.1 interface, since I had not yet updated my home lab (which is getting dusty again).
I have to double-check the T’s & C’s to make sure I can share more; if I can, I will post a few videos and screenshots.
Since I have seen a few other blogs with more screenshots, I decided to add another.
When using shared storage where deduplication is utilized along with an array-level, snapshot-based backup solution, what can be done to minimize both the capacity wasted by snapping transient files in backups and the CPU overhead of the storage controller attempting to deduplicate data which cannot be deduped?
Constraint:
1. Virtual machine memory reservations cannot be used to reduce the vswap file size

Requirements:
1. Reduce the snapshot size for backups without impacting the ability to back up and restore
2. Minimize the overhead on the storage controller for deduplication processing
3. Optimize the vSphere / storage solution for maximum performance

Options:
1. Configure the HA swap file policy to store the swap file in a datastore specified by the host.
2. Create a new datastore per cluster which is hosted on Tier 1 storage and ensure deduplication is disabled on that volume.
3. Configure all…
View original post 276 more words
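For what it’s worth, the swap-file placement in option 1 can be scripted. A hedged PowerCLI sketch – the cluster and datastore names are placeholders of mine, and it assumes an existing Connect-VIServer session:

```powershell
# Placeholder names throughout; assumes Connect-VIServer has already run.

# Let each host, rather than the VM's working directory, decide where vswap files live...
Set-Cluster -Cluster "Prod-Cluster" -VMSwapfilePolicy InHostDatastore -Confirm:$false

# ...and point every host in the cluster at a datastore with deduplication disabled.
Get-Cluster "Prod-Cluster" | Get-VMHost |
    Set-VMHost -VMSwapfileDatastore (Get-Datastore "Swap-NoDedupe")
```

Keeping the vswap files on a non-deduplicated volume is what takes that transient data out of both the backup snapshots and the controller’s dedupe processing.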
What is the most suitable DRS automation level and migration threshold for a vSphere cluster running an IaaS offering with a self-service portal and unpredictable workloads?
Assumptions:
1. Workload types and sizes are unpredictable in an IaaS environment; workloads may vary greatly and without notice
2. The solution needs to be as automated as possible without introducing significant risk

Requirements:
1. Prevent unnecessary vMotion migrations which will impact host & cluster performance
2. Ensure the cluster standard deviation is minimal
3. Reduce administrative overhead of reviewing and approving DRS recommendations

Options:
1. Use Fully Automated and Migration Threshold 1 – Apply priority 1 recommendations
2. Use Fully Automated and Migration Threshold 2 – Apply priority 1 & 2 recommendations
3. Use Fully Automated and Migration Threshold 4 – Apply priority 1, 2, 3 and 4 recommendations
4. Use Fully Automated and Migration Threshold 5 – Apply priority 1, 2, 3, 4 & 5 recommendations
5. Set DRS to manual…
View original post 208 more words
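Whichever threshold you land on, the automation level itself is easy to script. A hedged PowerCLI sketch – the cluster name is a placeholder of mine, and as far as I can tell the classic Set-Cluster cmdlet exposes the automation level but not the migration threshold, which is set in the cluster’s DRS settings in the client:

```powershell
# Placeholder cluster name; assumes an existing Connect-VIServer session.
Set-Cluster -Cluster "IaaS-Cluster" -DrsEnabled:$true `
    -DrsAutomationLevel FullyAutomated -Confirm:$false
```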
Microsoft has been, and for the most part still is, the standard for enterprise platforms and applications. For most of the last two decades they maintained a consistent user experience, added new functionality to their platforms and improved the performance of those platforms. With Windows 8 and Server 2012, I feel as though they have become too focused on beating what Apple did in the consumer space by redesigning the user experience into something we don’t realize we want yet. In the consumer space that is fine – NOT in the enterprise.
In addition to the UI, I also feel like Microsoft has not focused enough on continuous, predictable improvement (e.g. NT >> 2000 >> 2003 >> 2008) and has instead gone for bulk change, sometimes (as is the case with Exchange) changing the architecture too drastically. I liked Exchange 2003, 2007 was not bad, and for the most part I really liked 2010 (the management GUI was meh, but PowerShell was awesome). I also liked the segmentation between roles which, now with Exchange 2013, they have done away with.
It’s flux like Exchange, drastic overhauls to Hyper-V (which they might have gotten right in 2012, but who knows if they will keep it around) and a lack of commitment in consumer products (Zune, Kin, soon the Surface) that make me reluctant to trust my enterprise platforms to Microsoft. Even licensing terms that seemed somewhat understandable over the last two or three years are being changed again.
Is this just part of a knee-jerk reaction to the Surface not being good, or do these changes have you wondering about Microsoft’s direction as well?
So much great technology, so little time to test it all!
The Grizzly release of OpenStack Nova will have a new service, nova-conductor. The service was discussed on the openstack-dev list and it was merged today. There is currently a configuration option that can be turned on to make it optional, but it is possible that by the time Grizzly is released, this service will be required.
One of the efforts that started during Folsom development and is scheduled to be completed in Grizzly is no-db-compute. In short, this effort is to remove direct database access from the nova-compute service. There are two main reasons we are doing this. Compute nodes are the least trusted part of a nova deployment, so removing direct database access is a step toward reducing the potential impact of a compromised compute node. The other benefit of no-db-compute is for upgrades. Direct database access complicates the ability to do live rolling upgrades. We’re working…
View original post 272 more words
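For anyone wanting to try Grizzly before the service becomes mandatory, the opt-out flag mentioned above looks roughly like this in nova.conf (a hedged sketch based on the patch discussion – the option name or default may change before release):

```ini
[conductor]
# Perform conductor operations in-process, keeping Folsom-style
# direct database access instead of using the nova-conductor service
use_local = True
```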
Last week I was having a discussion with an Infrastructure as a Service (IaaS) provider, and one of the questions I left with was how I could integrate their service with my existing internal infrastructure. My thought was I could build a private-hybrid cloud (if that’s not a term yet, I own it – my definition of a private-hybrid cloud is being able to provide on-demand resources between an internal, traditional company-owned data center and an external service provider who can dedicate private resources) between my internal infrastructure and this vendor to provide on-demand infrastructure, scaling and high availability. This got me thinking about how I might be able to meet the sometimes unreasonable SLAs asked of technology groups by the business, and further wondering why those SLAs have kept increasing even though IT budgets are being slashed.
I started to think back to meetings between non-technology business leaders (e.g. sales, marketing, finance, etc.) and myself as we discussed what they wanted from IT. Typically when I am architecting a system or network design, one of my first questions to those business leaders is to explain what their expectations are. On many occasions the answer has been “100% up-time.” We all know in IT that’s not really reasonable, which is why we add in provisions so that maintenance and vendor bugs don’t count against that SLA. Now, if I am an IT person and my CEO and CFO say they want 100% up-time – great, I can certainly design a very resilient, high-performance infrastructure that can even overcome poor software code to recover from application or system crashes. One problem typically comes up, however: the desire for 100% up-time does not typically come with the budget to build that type of infrastructure.

When we are reviewing the design and budget to try to reach that 100% up-time requirement, the comment I hear quite often is something along the lines of “Why does it cost so much? Facebook/LinkedIn/some other consumer website never goes down and I use that for free” or “Why do we have to spend so much on storage? I can upload all the pictures I want to Facebook and I use that for free.” Once you explain that the services you consume from Facebook or LinkedIn are not actually their business – their business is big data, business intelligence and advertising – I am typically able to re-focus the meeting on the real needs of the business rather than a false expectation based on the perceived up-time of consumer services, and determine a real SLA for the various systems and applications.
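To put numbers on that conversation, here is a quick sketch (plain Python, names are mine) of how much downtime each “nine” of an SLA actually allows per year:

```python
# Allowed downtime per year for a given availability SLA.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 - ignoring leap years

def downtime_minutes_per_year(availability_pct):
    """Minutes of downtime per year permitted by an SLA percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for sla in (99.0, 99.9, 99.99, 99.999):
    print(f"{sla}%: {downtime_minutes_per_year(sla):.1f} minutes/year")
```

Even “three nines” concedes nearly nine hours a year, and every additional nine multiplies the cost of the design – which is exactly the budget conversation above.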
So while we typically think of the consumerization of IT in terms of BYOD and related needs such as security, monitoring or enterprise social networking, it reaches all the way through the network and infrastructure, right into policies, procedures and service-level agreements. Have you had a similar experience when working on your projects or budgets?
It was just about 8 years ago that I really fell in love with virtualization – thanks, actually, to Microsoft releasing Virtual Server – but there was really only one product I wanted to get my hands on, and that was VMware ESX (or GSX – I wasn’t going to be picky). At the time my company was a subsidiary of a larger company and we were starting to integrate IT departments – in other words, eliminate the one I was part of – however, it finally allowed me to get my first taste of VMware ESX: vMotioning a running Lotus Domino server between two physical hosts. I knew right there this was the technology I wanted to be part of. This is my (abbreviated) story of how I got to my VCP5-DV certification.
Having taken the VCP5-DV test (twice), I can say a key element to passing the exam is hands-on experience. Though I wasn’t able to do my first deployment of VMware ESXi 3.5 when it launched in 2007, I had pushed virtualization efforts quite heavily at two companies I worked for. One didn’t get its feet wet with virtualization until after I left, despite my building several labs based on Virtual Server; at the second I had to go the free route, leveraging Virtual Iron to build the internal infrastructure and VMware Server to set our QA engineers up with a testing environment (they later went full VMware). While I wasn’t using VMware products, the experience of learning how to build a virtual infrastructure was quite beneficial in building my first ESXi 3.5 environment – an internal POC for my company, a company that was opposed to virtualization all the way up to the CEO. From my previous experience building virtual labs, I had learned many of the “gotchas” that can kill a virtual environment very quickly, and found that my predecessors had fallen victim to them. Since then I have deployed multiple 3.5, 4, 4.1 and 5 production environments. The point here: don’t skimp on taking the time to build your environments, even if they are small – the experience is very much worthwhile.
There are also several great practice tests out there, which are a good gauge of your ability to interpret questions and find the right answer. The two I found most useful were the actual VMware practice tests at http://goo.gl/MI52l and Simon Long’s test at http://goo.gl/jjsgU. Take these early to gauge where you are, and leverage them as a tool to understand where you need to focus your preparation and study, as well as an ongoing assessment of how you are progressing.
Before formally starting to study – reading books, taking classes, or setting up your home lab – engage yourself in the VMware community. There are a lot of great people on Twitter, LinkedIn and the VMware Community forums. Meeting these people and being able to learn from something as simple as a 140-character tweet was invaluable to me. Next, don’t forget to review the exam blueprint. You can download it from VMware at http://goo.gl/0IffB, which is good, but several people have also taken the time to provide study materials based on the blueprint. My personal favorite version is by Mike Preston and can be found at http://goo.gl/wJS3M; another great version, from Josh Cohen and Jason Langer, can be found at http://goo.gl/4XVU9. Josh and Jason’s version I found very useful when setting up my lab, as it has a lot of step-by-step information, whereas Mike’s is more narrative in form. For the more formal reading, I read the bible – Scott Lowe’s Mastering VMware vSphere 5 – along with the VMware Press official VCP5 study guide and Brian Atkinson’s study guide; all can be found on Amazon.
From a class perspective, there is obviously a cost difference between the Install & Configure class and the Fast Track, but if you are making the investment I would highly recommend the Fast Track. Keep an eye out: I have seen VMware offer the Fast Track for about the same cost as the Install & Configure when there are openings just a few weeks prior to its start, so you can get the benefits of the Fast Track for the cost of the Install & Configure. The Fast Track class also gives you a voucher to take the exam for free, and a VMUG membership offers a free re-take voucher if you take the Fast Track course through Global Knowledge, in case you do not pass the first time (like I did).
When scheduling your exam, my advice would be to schedule it early on a Monday. This gave me the weekend to study and prepare, and I was able to avoid the distractions that can come up during the week. As I mentioned previously, I took the exam twice, so I have two “weekend before” scenarios. The first time, I focused heavily on reviewing technical documents and white papers from VMware (a list of what I reviewed can be found here: http://goo.gl/9hkop). I missed passing the exam by 2 questions. Bad luck? Oversaturated my brain? I think maybe a little bit of both.

I re-scheduled my exam as soon as possible – VMware requires waiting at least 7 days, and since the center I took the test at didn’t have any availability, I had to wait an extra day. This time I went a little easier on myself, focusing just on the two blueprint study guides I mentioned earlier and reviewing some of the areas I knew I did not do well on in my previous attempt. Since I had an extra day (I took the exam on a Tuesday), I added some cram notes from Vidad Cosonok, which you can find here: http://goo.gl/1NHqk, to my reading on Monday night to reinforce some of the basics. I showed up Tuesday morning, drove into the same parking lot, parked in the same spot and had to empty my pockets into the same locker I had used 8 days before (at this point I was a little worried I was having déjà vu). I passed, with a few questions to spare. I think I can safely say I passed because of everything I mentioned above – experience, reading, study guides, class, practice tests, etc.
I hope my experience will help others who are looking to go for their VCP… okay, so maybe this wasn’t as abbreviated as I first thought; as Christopher Kusek has mentioned to me before, I am a bit “verbose.”
Guest Post by Kanji B.
These are some notes from my nested lab setup on a Dell OptiPlex 790 (quad-core i5-2400 @ 3.10GHz, 16GB RAM, and it supports VT!). I hope these can help others in the VMware community doing the same.
I ran into my first gotcha quite early, thanks to Dell’s love of bleeding-edge NICs. It’s one of the few things that drives me nuts about Dell hardware, and I facepalmed as soon as ESXi threw up its “No network adapters detected” error. Off I went to research how – or even if! – I could inject drivers into the ESXi install, and fortunately I stumbled on someone who had already done so on the OptiPlex 790: http://bohemiangrove.co.uk/esxi-5-0-the-free-one/
A short while later, I had a fresh ESXi install and began installing my nested ESXi, when I ran into the SAME problem! WTF?! The host ESXi had networking, so why wouldn’t the guests? It turns out that the default adapter type for RHEL 6 (the guest type which a few of the nesting guides suggest you base your ESXi guest on) is vmxnet3, there’s no vmxnet3 driver in ESXi 5.0, and installing VMware Tools to get it wasn’t going to happen.
Poking around, I managed to fix it by using an E1000 adapter instead, and then noticed that virtuallyGhetto touched on this last month (http://www.virtuallyghetto.com/2012/09/nested-esxi-51-supports-vmxnet3-network.html), noting that 5.1 fixes this very issue. That solved, I took another stab at installing a nested ESXi, only to hit another showstopper when the installer didn’t detect any local or remote drives to install on. Poking around some more, I noticed that the SCSI controller type was set to VMware Paravirtual (not recommended for this guest OS) – ugh, bitten by the RHEL 6 defaults again… For reference, if you set it to LSI Logic Parallel, ESXi sees the provisioned drive as local; if set to LSI Logic SAS, as remote.
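For anyone repeating this, both fixes boil down to two lines in the nested ESXi guest’s .vmx file (a sketch – the device numbering assumes the first NIC and first SCSI controller, which may differ in your VM):

```
ethernet0.virtualDev = "e1000"
scsi0.virtualDev = "lsilogic"
```

The same changes can be made in the vSphere Client by editing the VM’s network adapter and SCSI controller types before installing.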
Ironically, if I had just gone with Windows 2008 R2 x64 (the default Guest type), I wouldn’t have run into either issue, as VMware defaulted to a supported Adapter and SCSI Controller!