By Gary Blake

Prior to the release of vCloud Automation Center (vCAC) v5.2, there was no awareness or understanding of vCenter Site Recovery Manager protecting virtual machines. With the introduction of vCAC v5.2, however, VMware now provides enhanced integration so vCAC can correctly discover the relationship between the primary and recovery virtual machines. These enhancements consist of what may be considered minor modifications, but they are fundamental enough to ensure vCenter Site Recovery Manager (SRM) can be successfully implemented to deliver disaster recovery of virtual machines managed by vCAC.

When a virtual machine is protected by SRM, a Managed Object Reference ID (or MoRef ID) is created against the virtual machine record in the vCenter Server database. Prior to SRM v5.5, a single virtual machine property called "ManagedBy:SRM,placeholderVM" was created on the placeholder virtual machine object in the recovery site vCenter Server database, but vCAC did not inspect this value, so it would attempt to add a second, duplicate entry into its database. With the introduction of 5.2, when a data collection is run, vCAC now ignores virtual machines with this value set, thus avoiding the duplicate entry attempt.

In addition, SRM v5.5 introduced a second managed-by property value, "ManagedBy:SRM,testVM", which is placed on the virtual machine record in the vCenter Server database. When a test recovery is performed and data collection is run at the recovery site, vCAC inspects this value and ignores virtual machines that have it set. This too avoids creating a duplicate entry in the vCAC database.

With the changes highlighted above, SRM v5.5 and later, together with vCAC 5.2 and later, can now be implemented in tandem with full awareness of each other. However, one limitation still remains when moving a virtual machine into recovery or re-protect mode: vCAC does not properly recognize the move. To successfully perform these machine operations and continue managing the machine lifecycle, you must use the Change Reservation operation, which is still a manual task.

While investigating the enhancements between SRM and vCAC just described, and on uncovering the need for the manual change of reservation, I spent some time with our Cloud Solution Engineering team discussing how to automate this step. They were already developing a tool called CloudClient, which is essentially a wrapper for our application programming interfaces that allows simple command line-driven steps to be performed, and suggested this could be developed to support this use case.
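To picture the data-collection behaviour described above, here is a minimal sketch of the "skip SRM placeholder and test VMs" logic. It is illustrative only: the record layout, property values and function names are assumptions made for this example, not the actual vCAC database schema or API.

```python
# Illustrative sketch only; field names and structures are assumptions.
SRM_MANAGED_BY_VALUES = {"SRM,placeholderVM", "SRM,testVM"}

def should_collect(vm_record):
    """Return True if a discovered VM should be added to the vCAC inventory."""
    return vm_record.get("managed_by") not in SRM_MANAGED_BY_VALUES

def data_collection(discovered_vms, inventory):
    """Add newly discovered VMs, skipping SRM placeholder/test VMs and duplicates."""
    for vm in discovered_vms:
        if should_collect(vm) and vm["moref_id"] not in inventory:
            inventory[vm["moref_id"]] = vm
    return inventory

# Example: the placeholder VM at the recovery site is ignored.
vms = [
    {"moref_id": "vm-101", "name": "app01"},
    {"moref_id": "vm-202", "name": "app01 (placeholder)", "managed_by": "SRM,placeholderVM"},
]
print(data_collection(vms, {}))
```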
In order to achieve fully functioning integration between vCloud Automation Center (5.2 or later) and vCenter Site Recovery Manager, adhere to the following design decisions:

Q. When I fail over my virtual machines from the protected site to the recovery site, what happens if I request the built-in vCAC machine operations?
A. Once you have performed a Planned Migration or a Disaster Recovery process, as long as you have changed the reservation within the vCAC Admin UI for the virtual machine, machine operations will be performed in the normal way on the recovered virtual machine.

Q. What happens if I do not perform the Change Reservation step on a virtual machine once I've completed a Planned Migration or Disaster Recovery process, and I then attempt to perform the built-in vCAC machine operations on the virtual machine?
A. Depending on which tasks you perform, some operations are blocked by vCAC and you see an error message in the log such as "The method is disabled by 'com.vmware.vcDR'", while other actions look like they are being processed but nothing happens. A few actions are processed regardless of the virtual machine failure scenario; these are Change Lease and Expiration Reminder.

Q. What happens if I perform a re-provision action on a virtual machine that is currently in a Planned Migration or Disaster Recovery state?
A. vCAC will re-provision the virtual machine in the normal manner, and the hostname and IP address (if assigned through vCAC) will be maintained. However, the SRM recovery plan will then fail if you attempt to re-protect the virtual machine back to the protected site, because the original object being managed has been replaced. It is recommended that, for blueprints where SRM protection is a requirement, you disable the 'Re-provision' machine operation.

Gary Blake is a VMware Staff Solutions Architect and CTO Ambassador.

By Anand Vaneswaran

In my previous post, I examined creating a custom dashboard in vRealize Operations for Horizon that displayed my current cluster capacity metrics in my virtual desktop infrastructure (VDI) environment. This helped provide insight into current utilization and performance. In this final post of the three-part series, I provide instructions on creating a one-click IT command center operations dashboard.

Many enterprises tend to centralize their IT command center operations in an effort to coordinate multiple technology focus areas, such as network, storage, and Microsoft Exchange, and bring them together under one roof. The idea is to be able to see, respond to, and resolve incidents that cause production environment outages with wide-ranging implications, to increase efficiency and speed up response times, and to create a centralized view of the overall environment. In a production VDI environment, the onus then falls on the command center to incorporate VDI as a technology focus area. In this blog I'll explain how to create a one-click dashboard that focuses on key stats central to the VMware View environment and helps command center personnel in times of outages.

As I have stated in previous posts, these examples can either be replicated in their entirety or used as a jumping-off point to construct a custom dashboard with the stats that are most germane to your environment and command center personnel. As in previous posts, I'm going to rely on a combination of "heat map" and "generic scoreboard" widgets, and I'm also going to introduce a widget type known as "resources" in this dashboard. In total there should be nine widgets. The final output should look like this:

I then want to configure my widgets so the following details are presented:
Heat maps
Generic scoreboard widgets
Resources widgets
The widgets can be arranged in the dashboard however you choose.

To start, I'm going to configure a full-clone widget to display the health of my ESXi hosts running full-clone desktops, and I want to place the heat map widget on the far left side of a three-column dashboard. The key here is to filter by the hosts running full-clone desktop workloads. This is achievable with a custom resource tag I've created in my environment; I demonstrated the technique for creating such a resource tag in the first post of this three-part series.
The configured widget should look like this:

Repeat this procedure for another widget for linked-clone desktop pools, and filter by the hosts running linked-clone workloads. The configured widget will look like this:

Next, I want to configure a widget to display the health of my View infrastructure servers, and I want to place it between the first two widgets along the top portion of the dashboard. It is important to place the infrastructure server resources in custom resource tags so they can be filtered by those resources. Here is the configured widget:

Next, a generic scoreboard widget is placed underneath the heat map widget we just configured. This widget will display the number of enabled connection servers that accept incoming connections. When complete it will look like this:

The next step is a generic scoreboard that displays just the total number of tunneled sessions through the View Security Server. And here is the end result:

We now want a heat map that displays the number of available virtual machines in the automated linked-clone pools. In order to ensure production pools are more or less consistent during peak times, we need a heat map that shows the maximum number of desktops and total sessions. Once again, the trick is to filter by a resource tag for your automated linked-clone pools; the heat map will look like this:

Next I want to work on a generic scoreboard that gives me the following details:
– Total number of current concurrent sessions
– Total number of overall virtual machines in my environment
– Workload percentage on the DHCP vLAN, which is serving all VDI desktop IPs
– A super metric that calculates the average bandwidth utilization in Kbps
– Outbound and inbound DHCP vLAN packet errors

The widget should be configured like this:

Super metrics are required to calculate the average bandwidth utilization per session and the total DHCP vLAN bandwidth utilization. Here is the super metric for calculating the average bandwidth utilization per session. We also need a super metric to calculate desktop DHCP vLAN total bandwidth utilization.
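The super metric definitions themselves appear as screenshots in the original post, so the sketch below is only a guessed restatement of the arithmetic involved: average per-session bandwidth taken as total desktop vLAN bandwidth divided by the number of concurrent sessions. The function name and sample values are invented for illustration.

```python
# Guessed restatement of the per-session bandwidth arithmetic; names and
# sample numbers are invented, not taken from the original super metric.
def avg_bandwidth_per_session_kbps(total_vlan_bandwidth_kbps, concurrent_sessions):
    """Average bandwidth per VDI session in Kbps."""
    if concurrent_sessions == 0:
        return 0.0
    return total_vlan_bandwidth_kbps / concurrent_sessions

print(avg_bandwidth_per_session_kbps(450_000, 900))  # -> 500.0 Kbps per session
```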
Finally, configure the two resources widgets. The first widget goes on the bottom left of the dashboard and is configured as follows:

The end result will appear like this:

Make sure to filter by the custom resource tag containing only full-clone pools. Replicate this process, step by step, on the bottom right-hand side of the dashboard, but this time for linked-clone pools. And here is the final dashboard!

In conclusion, here are a few takeaways from this blog:

Now, I've barely scratched the surface of VMware vRealize Operations Manager capabilities in these blog posts; there is so much more that has not yet been discussed. I just wanted to focus on a set of custom dashboards, each designed to achieve a very specific purpose. The methods detailed in these blog posts demonstrate only one approach; there are others. These show just some of the ways vRealize Operations Manager can be explored, data can be mined, and you can gain a view into your environment.

Anand Vaneswaran is a Senior Technology Consultant with the End User Computing group at VMware. He is an expert in VMware Horizon (with View), VMware ThinApp, VMware vCenter Operations Manager, VMware vCenter Operations Manager for Horizon, and VMware Horizon Workspace. Outside of technology, his hobbies include filmmaking, sports, and traveling.

By Gabor Karakas

Data centers are wildly complicated in nature and grow in an organic fashion, which fundamentally means that very few people in the organization understand the IT landscape in its entirety. Part of the problem is that these complex ecosystems are built up over long periods of time (5–10 years) with very little documentation or global oversight; therefore, siloed IT teams have the freedom to operate according to different standards, if there are any. Oftentimes new contractors or external providers replace these IT teams, and knowledge transfer rarely happens, so the new workforce might not understand every aspect of the technology they are tasked to operate, which creates key issues as well.

Migration or consolidation activities can be initiated for a number of reasons:

When the decision is made to move or consolidate the data center for business or technical reasons, a project is kicked off with very little understanding of the moving parts of the elements to be changed. Most organizations realize this a couple of months into the project, and usually find the best way forward is to ask for external help. This help usually comes from the joint efforts of multiple software and consultancy firms to deliver a migration plan that identifies and prioritizes workloads, and creates a blueprint of all their vital internal and external dependencies.

A migration plan is meant to contain at least the following details for identified and prioritized groups of physical or virtual workloads:
– Any special requirements that can be obtained either via discovery or by interviewing the right people

In reality, creating such a plan is very challenging, and there can be many pitfalls. The following are common problems that can surface during development of a migration plan:

It is vital that communication is strong between all involved, that technical details are not overlooked, and that all information sources are identified correctly. Issues can develop such as:

Technical and human information sources are equally important, as automated discovery methods can only identify certain patterns; people need to put the extra intelligence behind this information. It is also important to note that a discovery process can take months, during which time the discovery infrastructure needs to function at its best, without interruption to data flows or appliances.

As previously stated, team communication is vital. There is a constant need to:

It is important to accurately identify and document deliverables before starting a project, as misalignment with these goals can cause delays or failures further down the timeline.

With major changes in the IT landscape, there are also human resources matters to handle. Depending on the nature of the project, there are potential issues:
– It can be part of an outsourcing project that moves certain operations or IT support outside the organization

Some of these people will need to help in the execution of the project, so it is crucial to treat them with respect and to make sure sensitive information is closely guarded. The blueprinting team members will probably know what the outcome of the project will bring for suppliers and the customer's IT team.
If some of this information is released, the project can be compromised, with valuable information and time lost.

When delivering a migration blueprint, each customer will have different demands, but in most cases the basic request will be the same: to provide a set of documents that contain all servers and applications, and show how they depend on each other. Most of the time, customers will also ask for visual maps of these connections, and it is the consultant's job to make sure these demands are reasonable. There is only so much that can be visualized in a map that remains understandable, so it is best to limit the number of servers and connections to about 10–20 per map. The following complex image is an example of just a single server with multiple running services discovered.

Figure 1. A server and its services visualized in VMware's ADP discovery tool

Beyond putting individual applications and servers on an automated map, there can also be demand for visualizing application-to-application connectivity, and this will likely involve manipulating data correctly. Some dependencies can be visualized, but others might require a text-based presentation. The following is an example of a fictional setup, where multiple applications talk to each other, just like in the real world. Both visual and text-based representations are possible, and it is easy to see that for overview and presentation purposes a visual map is more suitable. However, when planning the actual migration, the text-based method might prove more useful.

Figure 2. Application dependency map: visual representation
Figure 3. Application dependency map: raw discovery data
Figure 4. Application dependency map: raw data in pivot table
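To make the raw-data and pivot-table views above more concrete, here is a minimal sketch of how host-level discovery records might be rolled up into application-to-application dependencies. The record format and the host-to-application mapping are assumptions made for this example, not the ADP export format.

```python
# Illustrative sketch: rolling host-level connection records up into
# app-to-app dependencies; data shapes are assumptions, not an ADP export.
from collections import defaultdict

raw_connections = [
    {"src": "web01", "dst": "db01", "service": "tcp/1433"},
    {"src": "web02", "dst": "db01", "service": "tcp/1433"},
    {"src": "app01", "dst": "mq01", "service": "tcp/5672"},
]

host_to_app = {"web01": "Webshop", "web02": "Webshop",
               "db01": "Webshop DB", "app01": "Billing", "mq01": "Messaging"}

def app_dependencies(connections):
    """Group host-to-host connections into application-to-application dependencies."""
    deps = defaultdict(set)
    for c in connections:
        src_app = host_to_app.get(c["src"], "Unknown")
        dst_app = host_to_app.get(c["dst"], "Unknown")
        if src_app != dst_app:
            deps[(src_app, dst_app)].add(c["service"])
    return deps

for (src_app, dst_app), services in app_dependencies(raw_connections).items():
    print(f"{src_app} -> {dst_app}: {', '.join(sorted(services))}")
```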
It is easy to see that a blueprinting project can be a very challenging exercise with multiple caveats and pitfalls, so careful planning and execution are required, with strong communication between everyone involved. This is the first in a series of articles that will give detailed overview, implementation and reporting methods for data center blueprinting.

Gabor Karakas is a Technical Solutions Architect in the Professional Services Engineering team and is based in the San Francisco Bay Area.

By Michael Francis

Over the last eight years at VMware I have observed so much change, and in my mind it has been transformative change. I think about my 20 years in IT and the changes I have seen, and feel the emergence of virtualization of x86 hardware will be looked upon as one of the most important catalysts for change in information technology history. It has modified the speed of service delivery and the cost of that delivery, and subsequently has enabled innovative business models for computing, such as cloud computing.

I have been part of the transformation of our company in these eight years; we've grown from being a single-product infrastructure company to what we are today: an application platform company. Virtualization of compute is now mainstream. We have broadened virtualization to storage and networking, bringing the benefits realized for compute to these new areas. I don't believe this is incremental value or evolutionary. I think this broader virtualization, coupled with intelligent, business policy-aware management systems, will be so disruptive to the industry that it will be considered a separate milestone, potentially on par with x86 virtualization.

Here is why I think the SDDC is significant:

As a principal architect in VMware's team responsible for the generation of tools and intellectual property that can assist our Professional Services and Partners to deliver VMware SDDC solutions, the last point is especially interesting and the one I want to spend some time on.

As an infrastructure-focused project resource and lead over the past two decades, I have become very familiar with developing design documents and 'as-built' documentation. I remember rolling out Microsoft Windows NT 4.0 in 1996 on CDs. There was a guide that showed me what to click and in what order to do certain steps. There was a lot of manual effort, opportunity for human error, inconsistency between builds, and a lot of potential for the built item to vary significantly from the design specification.

Later, in 2000, I was a technical lead for a systems integrator; we had standard design document templates and 'as-built' document templates, and consistency and standardization had become very important. A few of us worked heavily with VBScript, and we started scripting the creation of Active Directory configurations such as Sites and Services definitions, OU structures and the like. We dreamed of the day when we could do a design diagram, click 'build', and have scripts build what was in the specification. But we couldn't get there. The amount of work to develop the scripts, maintain them, and modify them as elements changed was too great. And that was when we focused only on the operating stack and a single vendor's back-office suite; imagine trying to automate a heterogeneous infrastructure platform.

Today we have the ability to leverage the SDDC as an application programming interface (API) that abstracts not only the hardware elements below and the application stack above, but also the APIs of ecosystem partners. This means I can write to one API to instantiate a system of elements from many vendors at all different layers of the stack, all based on a design specification. Our dream in the year 2000 is something customers can achieve in their data centers with SDDC today. To be clear, I am not referring just to configuring the services offered by the SDDC to support an application, but also to standing up the SDDC itself. The reality is, we can now have a hyper-converged deployment experience where the playbook of the deployment is driven by a consultant-developed design specification.

For instance, our partners and our professional services organization have access to what we refer to as the SDDC Deployment Tool (an imaginative name, I know), or SDT for short. This tool can automate the deployment and configuration of all the components that make up the software-defined data center. The following screenshot illustrates this: Today this tool deploys the SDDC elements in a single use case configuration.

In VMware's Professional Services Engineering group we have created a design specification for an SDDC platform. It is modular and completely instantiated in software.
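To make the notion of a design specification that software can consume more concrete, here is a hypothetical sketch of a machine-readable spec being walked by an automation routine. The structure and field names are invented for this example; they are not the SDT input format or any VMware schema.

```python
# Hypothetical illustration of a machine-readable design specification.
# Structure and field names are invented for this sketch.
sddc_design_spec = {
    "management_cluster": {"hosts": 4, "vsan_enabled": True},
    "compute_clusters": [{"name": "compute-01", "hosts": 8}],
    "network": {"management_vlan": 110, "vmotion_vlan": 120},
    "components": ["vCenter Server", "NSX Manager", "vRealize Operations"],
}

def deploy(spec):
    """Walk the specification and hand each element to the automation layer."""
    for component in spec["components"]:
        print(f"Deploying {component} as described in the design specification ...")

deploy(sddc_design_spec)
```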
Our Professional Services Consultants and Partners can use this intellectual property to design and build the SDDC. I believe our next step is to architect our solution design artifacts so the SDDC itself can be described in a format that allows software, like SDT, to automatically provision and configure the hardware platform, the SDDC software fabric, and the services of the SDDC to the point where it is ready for consumption. A consultant could design the specification of the SDDC infrastructure layer and have that design deployed in a similar way to hyper-converged infrastructure, but allowing the customer to choose the hardware platform.

As I mentioned at the beginning, the SDDC is not just about technology, consumption and operations: it provides the basis for a transformation in delivery. To me a good analogy right now is the 3D printer. The SDDC itself is like the plastic that can be molded into anything; the 3D printer is the SDDC deployment tool, and our service kits represent the electronic blueprint the printer reads to build up the layers of the SDDC solution for delivery. This will create better and more predictable outcomes, and also greater efficiency in delivering SDDC solutions to our customers, as we treat our design artifacts as part of the SDDC code.

Michael Francis is a Principal Systems Engineer at VMware, based in Brisbane.

By Anand Vaneswaran

In my previous post, I provided instructions on constructing a high-level "at-a-glance" VDI dashboard in vRealize Operations for Horizon, one that would aid in troubleshooting scenarios. In this second post of the three-part series, I will construct a custom dashboard that takes a holistic view of the vSphere HA clusters running my VDI workloads, in an effort to understand current capacity. The ultimate objective is to put myself in a better position not only to understand current capacity, but also to identify trends that help me forecast future capacity. In this example, I'm going to try to gain information on the following:

You can either follow my lead and recreate this dashboard step by step, or simply use this as a guide and create a dashboard of your own for the capacity metrics you care most about. In my environment, I have five (5) clusters comprising full-clone VDI machines and three (3) clusters comprising linked-clone VDI machines. I have decided to incorporate eight (8) "Generic Scoreboard" widgets in a two-column custom dashboard, and I'm going to populate each of these widgets with the relevant stats described above. Once my widgets have been imported, I will rearrange my dashboard so that the left side of the screen holds the full-clone clusters and the right side holds the linked-clone clusters. As part of this exercise I determined that I needed to create super metrics to calculate the following metrics:

With that being said, let's begin! The first super metric I will create is called SM – Cluster LUN Density. I'm going to design my super metric with the following formula:

sum(This Resource:Deployed|Count Distinct VM)/sum(This Resource:Summary|Total Number of Datastores)

In this super metric I will attempt to find out how many VMs reside in my datastores on average. The objective is to make sure I'm abiding by the recommended configuration maximums for the number of virtual machines residing on a VMFS volume.
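Restated outside the super metric editor, the calculation is a simple ratio; the sketch below shows the same arithmetic in plain Python with invented sample numbers.

```python
# SM - Cluster LUN Density restated in plain Python; sample values are invented.
def cluster_lun_density(deployed_vm_count, total_datastores):
    """Average number of VMs per datastore in the cluster."""
    return deployed_vm_count / total_datastores

print(cluster_lun_density(deployed_vm_count=480, total_datastores=16))  # -> 30.0
```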
The next super metric I will create is called SM – Cluster N+1 RAM Usable. I want to calculate the usable RAM in a cluster in an N+1 configuration. The formula is as follows:

(((sum(This Resource:Memory|Usable Memory (KB))/sum(This Resource:Summary|Number of Running Hosts))*.80)*(sum(This Resource:Summary|Number of Running Hosts)-1))/1048576

Okay, so clearly there is a lot going on in this formula. Allow me to break it down and explain what is happening under the hood. I'm calculating this stat for an entire cluster, so I take the usable (installed) memory metric under the Cluster Compute Resource kind and divide it by the total number of running hosts, which gives me the average usable memory per host. But hang on: there are two caveats I need to take into consideration if I want an accurate representation of the true overall usage in my environment:

1) I don't want my hosts running at more than 80 percent capacity when it comes to RAM utilization; I always want to leave a little buffer. So my utilization factor will be 80 percent, or 0.8.
2) I always want to account for the failure of a single host (in some environments you might want to factor in the failure of two hosts) in my cluster design, so that compute capabilities for running VMs are not compromised in the event of a host failure. I want to incorporate this N+1 cluster configuration into my formula.

So, I take the overall usable, or installed, memory (in KB) for the cluster, divide it by the number of running hosts in the cluster, then multiply that result by the 0.8 utilization factor to arrive at a number, call it x; this is the amount of real usable memory per host. Next, I multiply x by the total number of hosts minus 1, which gives me y and accounts for the N+1 configuration. Finally, I take y, still in KB, and divide it by 1024 x 1024 (1,048,576) to convert it to GB and get my final result, z.

The next super metric I will create is called SM – Cluster N+1 vCPU to Core Ratio. The formula is as follows:

sum(This Resource:Summary|Number of vCPUs on Powered On VMs)/((sum(This Resource:CPU Usage|Provisioned CPU Cores)/sum(This Resource:Summary|Total Number of Hosts))*(sum(This Resource:Summary|Total Number of Hosts)-1))

This formula is fairly self-explanatory: it divides the number of vCPUs on powered-on VMs by the physical cores that remain available when one host is removed. There is also a datastore usage super metric, where I take the total space used for the datastore cluster and divide it by the total capacity of that datastore cluster. This gives me a number greater than 0 and less than 1, so I multiply it by 100 to produce a percentage.
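For readers who want to sanity-check the arithmetic outside vRealize Operations, here is the same N+1 logic in plain Python. The 0.8 utilization factor and the single-host-failure assumption follow the text above; the sample inputs are invented.

```python
# N+1 cluster calculations restated in plain Python; sample inputs are invented.
def n_plus_1_usable_ram_gb(usable_memory_kb, running_hosts, utilisation=0.8):
    """Usable cluster RAM in GB with one host reserved for failover."""
    per_host_kb = usable_memory_kb / running_hosts
    return per_host_kb * utilisation * (running_hosts - 1) / (1024 * 1024)

def n_plus_1_vcpu_to_core_ratio(powered_on_vcpus, provisioned_cores, total_hosts):
    """vCPU-to-physical-core ratio assuming one host is unavailable."""
    cores_per_host = provisioned_cores / total_hosts
    return powered_on_vcpus / (cores_per_host * (total_hosts - 1))

# Example: 8 hosts with 256 GB of usable memory each, 320 cores, 1200 vCPUs.
print(n_plus_1_usable_ram_gb(usable_memory_kb=8 * 256 * 1024 * 1024, running_hosts=8))
print(n_plus_1_vcpu_to_core_ratio(powered_on_vcpus=1200, provisioned_cores=320, total_hosts=8))
```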
Once I have the super metrics I want, I attach them to a package called SM – Cluster SuperMetrics. The next step is to tie this package to current Cluster resources as well as Cluster resources that will be discovered in the future. Navigate to Environment > Environment Overview > Resource Kinds > Cluster Compute Resource. Shift-select the resources you want to edit, and click Edit Resource. Click the checkbox to enable "Super Metric Package", and from the drop-down select SM – Cluster SuperMetrics. To ensure this super metric package is automatically attached to clusters discovered in the future, navigate to Environment > Configuration > Resource Kind Defaults. Click on Cluster Compute Resource, and in the right pane select SM – Cluster SuperMetrics as the Super Metric Package.

Now that we have created our super metrics and attached the super metric package to the appropriate resources, we are ready to begin editing our "Generic Scoreboard" widgets. I will show how to edit two widgets (one for a full-clone cluster and one for a linked-clone cluster) with the appropriate data and show the output; we then replicate the same procedure for every unique full-clone and linked-clone cluster. Here is an example of what the widget for a full-clone cluster should look like:

And here's an example of what a widget for a linked-clone cluster should look like:

Once we replicate the same process and account for all of our clusters, our end-state dashboard should resemble something like this:

And we are done. A few takeaways from this lesson:

In my next tutorial, I will walk through the steps for creating a high-level "at-a-glance" VDI dashboard that your operations command center team can monitor. In most organizations, IT issues are categorized by severity and then assigned to the appropriate parties by a central team that runs point on issue resolution by coordinating with different departments. What happens if a Severity 1 issue afflicts your VDI environment? How are these folks supposed to know what to look for before placing that phone call to you? This upcoming dashboard will make it very easy. Stay tuned!

Anand Vaneswaran is a Senior Technology Consultant with the End User Computing group at VMware. He is an expert in VMware Horizon (with View), VMware ThinApp, VMware vCenter Operations Manager, VMware vCenter Operations Manager for Horizon, and VMware Horizon Workspace. Outside of technology, his hobbies include filmmaking, sports, and traveling.

By Ray Heffer, VCDX #122, VMware EUC Architect

Back in April 2012, I posted on my blog my original Horizon View network firewall ports diagram. Over the past two years it has been used widely, both internally at VMware and in the community. Since Horizon 6 was recently released, I thought I'd create a brand new full-size diagram that includes Cloud Pod Architecture. This updated diagram contains a better layout and a new color theme to boot! The image is 3767 x 2355 pixels, so simply click it to enlarge, then 'Save Image' to get the full-size HD version.

You'll notice the addition of VIPA (View inter-pod API) and ADLDS port 22389, which are both used for Cloud Pod Architecture. Bear in mind that between your View pods you will still require the usual Active Directory ports.

Key Firewall Considerations for VMware Horizon 6

For a full list of network ports, please refer to the latest Horizon 6 documentation: https://www.vmware.com/support/pubs/view_pubs.html

Ray Heffer is an EUC Architect working at VMware and a double VCDX with both VCDX-DCV (Data Center) and VCDX-DT (Desktop). Previously part of the VMware Professional Services team as a Senior Consultant, Ray now works for the Desktop Technical Product Marketing BU at VMware. Ray joined the IT industry in 1997 as a Unix admin, before focusing on end user computing with Citrix MetaFrame and Terminal Services in the early days. In 2004 Ray joined an ISP providing managed hosting and Linux web applications, but soon discovered VMware ESX 2.5 (and GSX!) and passed his first VCP in 2007.
Ray has many years of complex infrastructure design and delivery experience, including the integration of VCE Vblock for both EUC and Cloud, and two highly successful 10,000+ user VMware Horizon View design and implementation engagements. This post originally appeared on Ray's blog. Follow Ray on Twitter @rayheffer.

By Ray Heffer, VCDX #122, VMware EUC Architect

Since VMware Horizon View 5.2, there has been support for Microsoft Lync 2013. In fact, when I say 'support', I mean that both Microsoft and VMware have developed the architecture that provides a great user experience. Prior to Horizon View 5.2, only VoIP phones were supported, and there were bandwidth constraints that made this unviable and resulted in a poor experience for end users. For detailed information, see the VMware whitepaper on Horizon View 5.2 and Lync 2013, and take note of KB articles 2064266 and 2045726. In addition, Microsoft has a Lync 2013 technical resource page which covers the Lync 2013 VDI Plugin. If you're new to Lync 2013 or VMware Horizon View, this post will provide you with an architecture overview of how Lync 2013 integrates with virtual desktops running on Horizon View 5.3.

Architecture

In the architecture diagram that I've sketched here (below), you can see two users (Bill and Ted) using a webcam and headset with microphone to talk to each other using Lync 2013. The user at the bottom is using a virtual desktop accessed from a Windows client (PC or thin client), which will be running one of the following: Windows Embedded Standard 7 with SP1, Windows 7 with SP1, or Windows 8 (Tech Preview). Microsoft hasn't yet released a VDI plugin for Linux or zero-client manufacturers. The virtual desktop (shown on the right) that Bill is using contains the Horizon View agent (which you'd expect) and the Lync 2013 client. When Bill launches the Lync 2013 client on his virtual desktop, it detects the Lync VDI plugin on his physical client machine and establishes a pairing over RDP or PCoIP (a virtual channel). RDP will work, but PCoIP is the recommended approach. At this stage you are required to enter the password again, but it can be saved to prevent the prompt appearing every time.

Any instant messaging is still sent between the Lync 2013 client on the virtual desktop and the Lync 2013 server, but when Bill establishes a video call with Ted, who is also using the Lync 2013 client, the audio/video is sent directly from Bill's client device to Ted and NOT from the virtual desktop. The benefit is that the audio and video are not sent over PCoIP, consuming valuable bandwidth, and the user experience will be much better than (or at least as good as) using the native client. Remember that the Lync 2013 client itself is still communicating with the Lync 2013 server, but a large proportion of the bandwidth required for audio/video is no longer being passed back over PCoIP.

Troubleshooting

If you have Microsoft Lync 2013 Server in place, then implementation is relatively simple, but there are some things that can get overlooked. Here is a list of common troubleshooting tips:

I'd love to hear your thoughts on Microsoft Lync 2013 and/or your experiences using it with Horizon View, so feel free to comment below!

Ray Heffer (VCDX #122), VMware EUC Architect, joined the IT industry in 1997 working with Unix and focusing on Microsoft server and Cisco networking infrastructure.
While working for an ISP in 2005, Ray discovered VMware ESX 2.5 (and GSX!) and started migrating hosted workloads, discovering the joys of storage optimization, virtual networking and security. Achieving his first VCP in 2007, Ray has since specialized in VMware virtualization and has collected both VCP and VCAP certifications in data center (DCV) and desktop (DT) along the way. In addition, Ray holds ITIL v3 and MCSE certifications, and today he works for VMware as an End-User Computing Architect in the Technical Enablement team. This post originally appeared on Ray's blog. Follow Ray on Twitter @rayheffer.

By Gary Hamilton, Senior Cloud Management Solutions Architect, VMware

Every day, companies like Square, Uber, Netflix, Airbnb, the Climate Corporation, and Etsy are creating innovative new business models. But they are only as innovative as the developers who build their applications and the agility of the platform on which those applications are delivered. By using Pivotal CF, an enterprise PaaS solution (powered by Cloud Foundry) that constantly delivers updates and horizontally scales applications with no downtime, companies can develop applications at the speed of customer need and demand, not inhibited by infrastructure. Businesses, now more than ever, have a greater need for agility and speed; a solid underlying platform is the key to delivering faster services.

We all consume software as a service (SaaS), like Gmail, every day via our laptops, smartphones, and tablets. Platform as a service, or PaaS, acts as the middle layer between the applications and the infrastructure (that is, compute, storage and network). If everything is operating smoothly, the actual infrastructure on which software is built is something that few users even give a second thought. And that's how it should be.

The concept and value of infrastructure as a service (IaaS) is easy to understand and grasp. Being able to consume virtual machines (VMs) on demand, instead of waiting days or weeks for a physical server, solves a tangible problem. Platform as a service (PaaS) is different. Delivering VMs with middleware installed is how PaaS solutions have traditionally been presented, but isn't that a software distribution and automation problem? And therein lies the problem: we have neither identified the real problem nor the real end user to whom PaaS is a real solution, and it is therefore difficult to quantify the real value proposition of PaaS.

As stated earlier, PaaS is intended to provide that middle layer between the infrastructure and the application. PaaS should provide services that are leveraged by the application, enabling the application to deliver its services to its end user while abstracting that middle layer and the infrastructure. When we think about PaaS in these terms, we begin to home in on the real problem and the real PaaS consumer: the developer. The problem the developer faces is how to plug new services into an application on demand, as quickly as he or she is able to develop the new application. Developers are neither DBA nor Hadoop experts; they are not experts in high availability (HA) and resilience; they are not security experts, nor are they scaling and capacity management specialists. With PaaS, developers can use services that meet functional and non-functional requirements on demand: they should be plugged right in, with a variety of databases on demand. (Think of it as any database, elasticity, security, HA, or analytics on demand.) The possibilities are exciting!
PaaS essentially brings in an application with business services wrapped around it, and applications become enterprise-ready at the click of a button, versus waiting weeks or months to complete integration and performance testing. The PaaS model is a bit different in that consultants support a developer, who then supports a business. Conventional cloud solutions are aimed at the end user or a customer, whereas now the focus is on the applications. As far as IT goes, the focus is shifting toward innovation and away from the mentality that IT is about cost savings.

IT is No Longer About Saving Money

That's right, IT is no longer about saving money. Sure, saving money is important, but that's not where the real value is. The value is in new services that create new revenue streams. Just look at the innovative companies I listed above. To succeed, they had to recognize that developers are the engine of innovation, and innovation helps to drive revenue. To help educate customers, consultants need to assume the role of educator so companies can understand how to become more agile in the face of a changing industry. The problem is, many businesses see IT as a cost center and think that spending on IT isn't money well spent. Businesses need to innovate to grow revenue. PaaS resonates with those innovative companies: they recognize that a fast and agile platform can only help them innovate and deliver new services faster. And, in turn, that leads to profitability.

Gary Hamilton is a Senior Cloud Management Solutions Architect at VMware and has worked in various IT industry roles since 1985, including support, services and solution architecture, spanning hardware, networking and software. Additionally, Gary is ITIL Service Manager certified and a published author. Before joining VMware, he worked for IBM for over 15 years, spending most of his time in the service management arena, with the last five years fully immersed in cloud technology. He has designed cloud solutions across Europe, the Middle East and the US, and has led the implementation of first-of-a-kind (FOAK) solutions. Follow Gary on Twitter @hamilgar.

If you are virtualizing an SAP environment running business-critical applications, chances are these questions will sound familiar: Am I optimizing my SAP virtualization for the maximum benefit? What measures should I take to avoid negative business impact when running SAP production workloads on the VMware virtualized platform? Luckily, VMware Consulting Architect Girish Manmadkar recently shared his advice on this topic. To make sure you are designing and sizing your infrastructure for optimum business benefit, Girish suggests two new questions to ask yourself, your IT organization, and your vendors:

1. How will this environment need to scale?
2. Am I sizing my environment to support three to five years of growth?

When you understand the needs outlined by these questions, you can then work with hardware vendors, as well as your VMware and SAP teams, to find the best solution. From an operational standpoint, there are also efficiencies within the SAP environment, once it is virtualized, that you want to be sure to take advantage of:

1. Scaling out during month-end and quarter-end processing is a snap compared to the hours it can take otherwise.
2. Products like vCenter Operations Manager help make sure your SAP Basis admin and VMware admin are always on the same page, making it far faster and easier to troubleshoot the environment.
3. You'll be able to provide the operations team with 24-hour monitoring of the entire SAP virtual infrastructure, allowing for a proactive approach to minimize or eliminate downtime.

Check out Girish's video, above, for more details. Girish Manmadkar is a veteran VMware SAP Virtualization Architect with extensive knowledge and hands-on experience with various SAP and VMware products, including various databases. He focuses on SAP migrations, architecture design, and implementation, including disaster recovery.

Taken from Chinese philosophy, "tao" refers to a "path" or guiding principle. With the exciting range of new technologies available (such as software-defined storage and network virtualization), it's important that IT organizations establish an over-arching strategy for integrating them with the existing architecture. In this short video, Wade Holmes (VCDX #15 and VMware Staff Solutions Architect) outlines the "VCDX Way", which emphasizes an integration plan that is closely mapped to business priorities. How are you approaching the integration of new technologies? Are you mapping updates to business strategy? We'd love to hear about your experiences in the comments.



Jun 6 | Posted by prehospitalresearcher

Coronary artery disease (CAD) is one of the leading causes of death and disability in the Western world and accounts for approximately 40% of all deaths in Australia annually, so the concept is essential for paramedics to understand. Survivors of an acute myocardial infarction (AMI, or heart attack) have a 1.5–15 times increased mortality risk compared with the general population, despite a 30% reduction in fatalities from AMI over the past three decades. It is estimated that 1 in every 5 to 6 adults in Australia has established CAD, and it is believed that CAD begins while in utero, with infant autopsies revealing CAD in its early stages.

Some key definitions

CAD refers to the presence of atherosclerotic changes in the wall of the coronary arteries and the potential to form thrombosis due to plaque rupture, leading to the disruption of blood flow to the myocardium. Arteriosclerosis is the hardening of the arteries. Atherosclerosis is the most common form of arteriosclerosis and refers to plaque build-up in the coronary arteries.

Healthy endothelium

Endothelium is the monolayer inner lining of the entire vascular system, consisting of simple squamous endothelial cells; it covers approximately 700 m2 and weighs approximately 1.5 kg. Endothelium permits diffusion and filtration and facilitates transport across the membrane. Endothelium is key to understanding atheroma, which will be discussed shortly.

Functions of the endothelium

The endothelium has a non-thrombogenic surface (it prevents thrombus/clot formation) which produces prostaglandin derivatives such as prostacyclin (an inhibitor of platelet aggregation, or platelets sticking together) and is covered in heparan sulphate, which prevents clotting. The endothelium secretes endothelium-derived relaxing factor (EDRF), a potent vasodilator, as well as nitric oxide. This enables the endothelium to help manage the fine balance between vasodilation and vasoconstriction. The endothelium also secretes fibrinolytic agents (which promote the breakdown of fibrin, the protein involved in the clotting process). The endothelium also secretes cytokines (small proteins produced by the immune system that carry signals locally between cells) and adhesion molecules, plus vasoconstrictors such as angiotensin II, serotonin and platelet-derived growth factor (PDGF).

What the endothelium regulates

Atheroma

An atheroma is an accumulation and swelling in the artery wall made up of cells (mostly macrophages) or cell debris containing lipids (cholesterol and fatty acids), calcium and a variable amount of fibrous connective tissue.

Birth of plaque

Excess blood LDL (low-density lipoprotein, the "bad" form of cholesterol) injures the endothelium. LDL particles lodge under the endothelium and set off an inflammatory response. Macrophages ingest oxidised LDL and become fat-laden and frothy (they become foam cells). This forms a fatty streak, the earliest form of atherosclerotic plaque.

Plaque progression

Inflammatory mediators cause proliferation of smooth muscle cells and accumulation of collagen fibres in the tunica media. This results in a fibrous cap growing over the plaque.

Angina

Angina is essentially the next step of atheroma formation; more precisely, angina is caused by atheroma formation. Angina is a temporary state of myocardial demand–supply mismatch, where the oxygen demand of the heart is not met by the supply due to the narrowing of the coronary arteries from atheroma formation.
This results in the myocardium becoming ischaemic, but infarction does not occur. Angina is usually relieved by rest or nitrates and generally lasts less than 10 minutes.

Unstable angina (UA)

In a similar fashion to angina, UA is caused by atheroma formation. More specifically, UA is caused by a small rupture in the fibrous cap of the atheroma; there is minimal occlusion of the artery, but it does cause a mismatch between the oxygen demand of the myocardium and the oxygen supply. UA is angina that has sustained a change in pattern, such as a new onset, becoming more frequent, easier to provoke or difficult to relieve, occurring at rest for prolonged spells, or the recurrence of angina post AMI or PCI (percutaneous coronary intervention, such as a stent). In UA there is no evidence of myocardial injury or necrosis, and mortality for UA ranges from 2–5% at 30 days and 4–15% at 1 year.

Plaque rupture

Inflammatory substances secreted by the foam cells weaken the fibrous cap by digesting the collagen matrix and damaging the smooth muscle. Foam cells may also display tissue factor, a potent clot promoter. Exposure of the necrotic lipid core to the blood activates the formation of a thrombus. Thrombosis results from physical disruption of the plaque and exposure of the lipid pool to the blood and clotting factors, further enhancing thrombus formation. Two-thirds of AMIs (thrombotic in origin) are caused by a fracture of the plaque's fibrous cap resulting in occlusion, whilst one-third are caused by superficial erosion of the intima (more common in women, which potentially explains why they are more likely to suffer a 'pain-free' AMI). The vulnerability of a plaque is determined by the size of the lipid core and the fibrous cap, in addition to the number of inflammatory cells (foam cells) trapped under the fibrous cap.

Infarction occurs when the thrombus completely occludes a coronary artery, resulting in massive myocardial necrosis due to interruption of the myocardial blood supply (and therefore oxygen supply). When an infarction occurs there are three distinct areas or zones of tissue that constantly evolve and are not static:

AMI is also known as an ST-segment elevation myocardial infarction (STEMI), as elevation of the ST segment on an ECG is a common sign of AMI. The theory behind ST elevation is discussed a little further on. The new terminology to be used for STEMI is STEACS, or ST-segment Elevation Acute Coronary Syndrome, as the only definitive test for an AMI is a blood test.

Classic presentation of an AMI

A classical presentation of an AMI is sub-sternal pain or discomfort described as chest pressure, heaviness, tightness, fullness, squeezing or burning. It is less frequently described as knifelike, stabbing or sharp (though up to 15% of AMIs will present with this description of the pain or discomfort). The pain or discomfort is typically described as severe. The pain will generally radiate down the arm, up into the neck or jaw, and/or to the infrascapular region. Radiation to both arms or to the right arm dramatically increases the likelihood of AMI and is considered the only predictable clinical indicator of AMI. Other signs and symptoms associated with AMI include nausea, vomiting, dyspnoea, palpitations and diaphoresis.

CW's Hints: Up to 15% of AMIs will present with pain that is classically considered pleuritic in nature, worsening on movement and inspiration.
Credit to Matthew Johnson for the image below.

Acute coronary syndrome (ACS) is a continuum of clinical manifestations, ranging from asymptomatic atherosclerosis and stable angina, to unstable angina, AMI, STEMI, nSTEMI and sudden death. All share a common pathophysiological process characterised by coronary plaque disruption with superimposed thrombus formation, ranging from a superficially adherent thrombus interrupting coronary blood flow to total occlusion compromising myocardial perfusion, leading to ischaemic necrosis and eventually infarction. The traditional philosophy of separate angina states and AMI is too simplistic, as the underlying pathophysiology may actually be the same in each state. The concept of ACS relies upon an understanding of the common underlying pathophysiology of atheroma formation and plaque disruption.

An example: a patient presenting with unstable angina that is relieved with nitrates and rest is now understood to have the potential to be forming a thrombus due to plaque rupture, and has a greater likelihood of developing myocardial ischaemia and even AMI. Traditional beliefs centred on the growth of the atheroma until luminal blood flow was reduced by 70%, leading to symptoms of angina. This is now understood not to be the case.

Angina

Angina is where a plaque causes a partial blockage of a coronary artery.

Unstable angina (UA)

UA is episodic angina that has had a change in its normal presenting pattern. This includes a new onset of angina that was previously controlled, more frequent occurrence of angina, angina that has become difficult to relieve or easier to provoke, angina that occurs at rest for prolonged periods of time, or the recurrence of angina post AMI or percutaneous coronary intervention (PCI, more commonly known as a stent). With UA there is no evidence of myocardial injury or necrosis. The non-occlusive thrombus of UA can become transiently or persistently occlusive. Finally, depending on the underlying cause and duration, the presence of collateral blood vessels and the area of hypoperfused myocardium, myocardial injury and necrosis can occur (AMI).

ECG changes: In UA, ECG changes that could occur include ST-segment depression, T-wave inversion, a combination of the two, or no ECG changes at all.

Non-STEMI (nSTEMI)

nSTEMI is a clinical syndrome that represents up to 25% of AMIs. It is characterised by a patient presenting with symptoms of myocardial ischaemia and necrosis and a positive troponin, but without the development of ST-segment elevation (5). In the prehospital setting, distinction between UA and nSTEMI is generally not possible, as the two are distinguished by the presence of a positive troponin. The most common cause of an nSTEMI is a non-occlusive thrombus; 25% of all nSTEMIs go on to develop ST-segment elevation on their ECGs, with the remaining 75% staying as nSTEMIs.

ECG changes: In nSTEMI, ECG changes that could occur include ST-segment depression, T-wave inversion, a combination of the two, or no ECG changes at all.

STEMI

STEMI occurs when a thrombus that has formed in a coronary artery becomes completely occlusive, resulting in myocardial ischaemia and necrosis due to the disruption of blood flow to the myocardium. The difference between an nSTEMI and a STEMI is the development of ST-segment elevation with a positive troponin (see below for the mechanism of ST-segment elevation).
The ECG criteria for a STEMI are ST-segment elevation in two or more anatomically contiguous leads, with the elevation being 1 mm or greater in the limb leads or 2 mm or greater in the chest (precordial) leads of a 12-lead ECG; or the presence of a new left bundle branch block (LBBB).

ECG changes: these include the presence of ST-segment elevation, T-wave inversion, hyperacute T waves and the development of Q waves.

Mechanism of ST-segment elevation

ST-segment elevation is due to the fact that the myocardial cells that have infarcted (died) are unable to maintain their resting membrane potential in diastole, so they never return to their resting state. Essentially there is a shift in resting membrane potential from −90 mV to −70 mV (more negative extracellularly).

Stages of infarction (ST-segment elevation infarction)

The early phase of infarction is characterised by tall, peaked T waves (hyperacute T waves), and you may also see ST elevation during this phase. This is the end of the thrombolysis window. ST elevation occurs as the injured myocardium repolarises earlier than the healthy tissue; the vector, or current of injury, is directed away from the injured tissue towards healthy tissue. ST elevation lasts a variable period of time but usually begins to evolve within the first 24 hours (1).

The evolving phase is said to occur when the combined patterns of infarction, injury and ischaemia are present. This is typified by the presence of marked ST-segment elevation, Q-wave development and, later on, T-wave inversion. This typically occurs over a period of hours to days (1).

The phase of resolution occurs over a period of days to weeks and even months, and is typically characterised by the return of the ST segment and T waves to normal. However, the presence of pathological Q waves is permanent. T-wave inversion may still be present weeks after infarction has occurred and may persist for months.

Risk factors for Acute Coronary Syndrome

Smoking – increases platelet activity and catecholamine levels, alters prostaglandins and decreases high-density lipoproteins (HDL).
Diabetes – causes endothelial dysfunction, decreases thromboresistance and increases platelet activity, thus accelerating atherosclerosis.
Hypertension – angiotensin II (a potent vasoconstrictor) can cause intimal inflammation and endothelial expression of adhesion molecules. ACE inhibitors may have a role in retarding inflammation as well as blood pressure. Hypertension increases the presence of collagen and elastin, increases endothelial permeability, and increases platelet and monocyte accumulation.
Obesity – predisposes the patient to insulin resistance and diabetes. It contributes to atherogenic dyslipidaemia due to conversion of free fatty acids to VLDL and lowering of HDL levels. Adipose tissue also synthesizes cytokines, promoting inflammation and thermogenesis.
Family history – predisposition to hyperlipidaemia, endothelial dysfunction, diabetes, etc.
Culture – lifestyle and diet plus genetic grouping predispose certain cultures to CAD, e.g. Sri Lankan.
Alcohol consumption – 5–7 glasses per week decreases risk.
Genetic disease – Buerger's syndrome (thromboangiitis obliterans), infiltrative connective tissue diseases (amyloidosis/sarcoidosis).
Infection – Chlamydia pneumoniae and CMV.
Drug use – cocaine use can lead to coronary artery spasm and AMI.
C-reactive protein – inflammatory marker.

The clinical bottom line

A tender chest does not rule out acute coronary syndrome! Chest wall tenderness suggests that ACS is less likely, but it does not effectively rule out the diagnosis of ACS.
The first ECG in ACS has low sensitivity for AMI detection, with rates varying between 13% and 69%, so a normal initial ECG does not rule out AMI; the only definitive tests are troponin and CK-MB. However, an out-of-hospital ECG does reduce call-to-balloon and door-to-balloon times significantly.

References
1. Frantz K. Concepts in Advanced Electrocardiology and Cardiology. Melbourne; 2011.
2. Thrombosis Advisor. Bayer Schering Pharma AG; 2008 [cited 29/12/2012]. Available from: www.thrombosisadvisor.com.
3. Martini N. Fundamentals of Anatomy & Physiology. 8th ed. Sydney: Pearson; 2009.
4. Heuser J. AMI bloodtests. Wikipedia; 2013 [cited 12/01/2013]. Available from: http://en.wikipedia.org/wiki/File:AMI_bloodtests_engl.png.
5. Guy JS. Pharmacology for the Prehospital Professional. Missouri, USA: Mosby JEMS, Elsevier; 2007.
6. Tintinalli J, Kelen D, Stapczynski J. Emergency Medicine: A Comprehensive Study Guide. 6th ed. United States of America: McGraw-Hill; 2004.
7. Sanders M. Mosby's Paramedic Textbook. Missouri: Elsevier Mosby; 2007.
8. ARC. Guideline 14.2 – Acute Coronary Syndromes: Initial Medical Therapy. Australian Resuscitation Council; 2012.
9. ACTAS. ACT Ambulance Service Clinical Management Guidelines, Draft Edition. 2012.
10. AV. Clinical Practice Guidelines for Ambulance and MICA Paramedics. Melbourne: Ambulance Victoria; 2012.
11. Randomised trial of intravenous streptokinase, oral aspirin, both, or neither among 17,187 cases of suspected acute myocardial infarction: ISIS-2. The Lancet. 1988;332(8607):349-60.
12. Henrikson CA, Howell EE, Bush DE, Miles S, Meininger GR, Friedlander T, et al. Chest Pain Relief by Nitroglycerin Does Not Predict Active Coronary Artery Disease. Ann Intern Med. 2003;139:979-86.
13. Matok I, Gorodischer R, Koren G, Sheiner E, Wiznitzer A, Levy A. The Safety of Metoclopramide Use in the First Trimester of Pregnancy. New England Journal of Medicine. 2009;360:2528-35.

Christian.

As always, the views expressed on this blog are my own and do not reflect those of my employer nor the university that I attend.




