Eleven Cool Things Our Clients Like About Our Cloud-Based GIS Setups

Two weeks ago, one of our hosting clients emailed our support team looking for advice and ideas about a short-term need to run a geoprocessing model.  They were concerned that running the model might disrupt the performance of some of the other applications delivered from the server they planned to use.  After some brief discussion about their needs, our team spun up a new cloud-based GIS server for them based on a configuration template that we have stored for our own occasional needs.  Within an hour and a half, they had logged on, transferred data, and begun executing their model on the new system.  After three days, they had completed their geoprocessing, QC’d all the output, and were ready to retire the system.  We spun it back down.  They got their work done with no performance impact on their application server and a modest, discrete cost increase for their temporary use of the specialized server resource.  Our support team got two thumbs up and several exclamations to the effect of: “That was so cool, I didn’t even know we could do that!”

There have been a few of these pleasant “aha!” moments lately, so I figured it might be time to ask our technical team for a quick hit list of other things our existing and prospective hosting customers might want to know about our cloud-based GIS resources.  Here you go…


Cool Thing #1: Spin ‘em up and then retire them

Engage and retire server resources as and when necessary.  As shown in the example above, short-term or occasional needs for geoprocessing, tile cache creation, running massive regressions… impressing your friends with your amazing server… can be met by spinning up and using a server resource for a defined period.  You get the resource you need when you need it, but don’t pay for an otherwise under-utilized resource during periods of low or no demand.  This is the “elastic” part of that “elasticity” of cloud resources you’ve heard about.  It’s real.
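
For the technically curious, here’s roughly what that spin-up-and-retire lifecycle can look like when scripted.  This is a minimal sketch, not our actual tooling: it assumes an AWS-style environment driven from Python with boto3, and the image ID, instance type, and key pair name are all placeholders.

```python
# Hedged sketch (not our actual tooling): spin up a temporary geoprocessing
# server and retire it when the work is done, using boto3 against an
# AWS-style API. All IDs and names below are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Spin it up from a stored template (see Cool Thing #2)
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical GIS server image
    InstanceType="r5.2xlarge",        # memory-heavy box for geoprocessing
    KeyName="client-keypair",         # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# ...client logs on, loads data, runs the model for a few days...

# Retire it once the geoprocessing is done and QC'd
ec2.terminate_instances(InstanceIds=[instance_id])
```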

Cool Thing #2: Keep a recipe folder of your favorite server setups

Although things have certainly improved, there is still some effort and time that goes into configuring an optimized GIS, database, or processing server that’s set up to do specialized tasks.  If you think it might be useful to have a vanilla/clean-slate system to roll back to, or to add into your lineup at some future point, we can very easily create a template based on one or more of your servers once it’s been set up to your liking.  We basically just clone your server, duplicating it using a short-term allocation of virtualized resources (vRAM, storage, etc.).  Once the cloning process is complete (it’s quite fast; usually 1-3 hours depending on the volume of data involved), we can release the allocated resources and keep just the template the process creates.  We’ll keep as many of these templates as you want – they’re like recipes for your favorite cloud-based GIS server setups.  You can point at them at any time and say “make me one of those!”
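
If you like to peek under the hood, here’s a minimal sketch of the “recipe” idea, again assuming an AWS-style environment scripted with boto3 rather than our own cloning process.  The instance ID and template names are made up for illustration.

```python
# Hedged sketch of the "recipe folder" idea: snapshot a tuned server into a
# reusable template (AMI), then launch copies from it later. Assumes an
# AWS-style API via boto3; IDs and names are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Capture the configured server as a template ("recipe")
image = ec2.create_image(
    InstanceId="i-0abc1234def567890",          # your tuned GIS server
    Name="client-gis-appserver-template-v1",   # the recipe's name
    Description="GIS app box, tuned for tile cache creation",
    NoReboot=True,                              # clone without taking it down
)

# Wait until the template is ready to use
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# Later: "make me one of those!"
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="m5.xlarge",
    MinCount=1,
    MaxCount=1,
)
```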

Cool Thing #3: Speedy resource recruitment and scaling

Especially if you’ve worked with us to plan out/anticipate some of your potential needs, you’ll be in a good position to quickly add server resources as demand requires. Application servers, database servers, failover web servers, geoprocessing workhorses…they can all be rapidly integrated into your ecosystem of cloud-based resources and can expand out in a v formation (or any other formation).  Again – this is that  “elasticity” of the cloud you’ve heard about.  Bring them up, put them to bed — optimize your resource utilization so performance is sustained and you’re not paying for what you’re not using.  It’s pretty neat.  Customers like this (btw, I’m the one they talk to about money).

Cool Thing #4: Hang up your sys admin hat

GIS folks are chronically needy when it comes to permissions and privileges and settings on server systems. In the best of circumstances, GIS people are work-makers for IT staffs in their organizations; more typically they are the source of daily consternation because of all the pleading for accommodations that defy what IT folks want or know how to give.  One result of this is that GIS people end up with their own servers to administer but they often lack the training for how to properly do it.  If you’re fatigued by the fight with IT and/or you’re tired of playing a sys admin on TV, you may find relief in our cloud-based GIS options.  We can get you into a sweet, state-of-the-art system that meets all your little GIS-y requirements and we can keep your setup humming along.  We know where you’re coming from, we know where IT is coming from…there’s therapy for both of you over here.

Cool Thing #5: Redundancy and recovery with virtualized data volumes, on- and off-site backups, and multi-regional options…

Typically the quest for redundancy and failover isn’t the thing that motivates our clients to move to the cloud, but it is a side benefit they enjoy almost universally; many folks are breathing easier because it’s no longer out there on the to-do list.  As described above, you can get redundancy and failover in the form of multiple, mirrored servers.  You should also know that even with a single server, your data, at the file level, is stored on highly redundant virtualized storage resources that have awesome performance statistics.  We also offer a variety of backup plans tailored to the needs of our customers, ranging from local backups and network-attached storage backups to off-site backups, and retention policies can be adjusted to your needs and preferences.  Also, if your business continuity plans require multi-regional failover or redundancy, we can provide this, mirroring resources in two or more data centers that have parity in their awesomeness with our primary facility.  Ask and ye shall receive more info about the particulars.

Cool Thing #6: Easier upgrades…

Cool Things #1 and #2 are prerequisites for this cool thing.  You’ve read them?  Well then, here’s how upgrades can happen.  We either clone your existing server (the one targeted for software upgrades) or we spin up a new machine based on a selected recipe.  Now you have your production system (still chugging along) and an identical or very similar system standing beside it.  We organize who is doing what and execute all the upgrade tasks accordingly (plan, implement, test, etc.).  Once the upgraded system has been fully vetted and equipped with all necessary data and resources, it can slip into production.  After a short period of verification, we then “turn down” your old production system.  People who are used to the stress and headaches of upgrading complex GIS software on a single production system think this approach is really cool.  They tell us, “This approach is really cool!”
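
For readers who want to picture that final swap, here’s one hedged illustration of a cutover step, assuming traffic is switched with a DNS update (AWS Route 53 via boto3).  In practice the mechanics depend on your setup (load balancers, DNS, connection strings), and every name and ID below is a placeholder.

```python
# Hedged sketch of the "slip into production" step: repoint a DNS record at
# the upgraded server. Assumes DNS-based cutover with AWS Route 53 via boto3;
# the hosted zone ID, record name, and address are hypothetical.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={
        "Comment": "Point gis.example.org at the upgraded server",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "gis.example.org",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "203.0.113.42"}],  # new box
                },
            }
        ],
    },
)
# After a short verification period, the old production server gets "turned down".
```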

Cool Thing #7:  Give me speed, give me space… 

Both our private and multi-tenant cloud systems are based on allocations from banks of virtualized resources.  As your memory and drive space needs fluctuate, we can quickly add or remove these resources from your system.  Map services, too, can be configured to duplicate and spool up, or to spin down and retire based on fluctuations in demand.
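
Here’s a minimal sketch of what that kind of resizing can look like in script form, once more assuming an AWS-style environment and boto3 rather than our actual tooling; the volume ID, instance ID, and sizes are placeholders.

```python
# Hedged sketch of "give me speed, give me space": grow a data volume and
# step up to a larger instance type. Assumes an AWS-style API via boto3;
# IDs and sizes are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# More space: grow the attached data volume to 500 GiB
ec2.modify_volume(VolumeId="vol-0abc1234def567890", Size=500)

# More speed: stop the server, bump the instance type, start it back up
instance_id = "i-0abc1234def567890"
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "r5.4xlarge"},  # larger memory/CPU footprint
)
ec2.start_instances(InstanceIds=[instance_id])
```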

Cool Thing #8: 24×7×365 monitoring and staffed network operations center…

We have very customized monitoring and incident escalation plans with many of our hosting customers.  We can offer as much oversight of your systems and resources as you need.

Cool Thing #9: Operating instead of capital expense…

This one may not be quite as universally adored, since it can present a bit of an accounting paradigm shift.  Still, many clients have found it significant and beneficial to realize that they no longer face the political budget battle over massive capital investments in new server systems every x number of years.  Cloud-based GIS can function (and be accounted for) much more like a utility than your old server setup did.  You may find real benefit in shifting your GIS infrastructure out of capital spending and into your operating budget.

Cool Thing #10: Multi-tenant and private cloud options (including HIPAA- and HITECH-compliant clouds)…

Without going into all the tech detail, suffice it to say that our tech team is excited to have you know that our cloud configuration options are extensive and can include multi-tenant only, private cloud only, or hybrid setups if your needs dictate.  We are also capable of configuring private clouds that satisfy the operational, administrative, technical and physical security controls necessary to meet the requirements for HIPAA and HITECH compliance.  We love helping our customers design forward-looking systems that are practical for their current needs and that scale to their ambitions.

Cool Thing #11: All these services come with our commitment of excellent, prompt, human support provided by people who know you and your setup


That’s right, our cloud-based GIS services GO TO 11!  If you want to talk through any of these ideas and see how they may apply to your situation, give us a call at 866.370.4278 or email info@gartrellgroup.com.


Data Shame, Big Data, and the Need to Tell a Story

Final edits are just going in on a Database Foundation Plan for the Oregon Department of Land Conservation and Development (DLCD).  We are tentatively emerging from the thickets of statutory law, plan amendment procedures, urban growth boundary expansions, appeals, hearings…  Before burrowing into the next project and its challenges, it’s refreshing to take a moment above ground and reflect on our work.

If you go with the marketing, ours is the age of ‘big data,’ and discovering patterns, trends, and critical relationships in behemoth, organization-straddling data sets is our grail.  In this context, I find that ‘data shame’ is a rampant if little-acknowledged condition that extends into the recesses of many companies and agencies.  I don’t have statistics to share, but I know that this vexing malady is experienced in varying degrees by many people who must rely upon particular data to inform decision making and to direct actions that have real consequences.  Expectations are high, hype is unchecked, and many people endure the daily angst of working with data that doesn’t measure up to their needs.

In projects where our firm has been brought in to help people lasso their data, we have come to expect the moment of pause, followed by apologies and looks of mild embarrassment, just before our clients reveal the uncomfortable secret of their data’s deficiencies… and the workarounds, assumptions, and wild guessing that they are forced to do in order to get work done despite imperfect information.  Most people tend to think their data is the worst we will have seen.  “Can you believe this?”  “Have you ever seen anything like this?”

The answer is “yes.”  The day is yet to come when someone sits us down and shows us a perfect data system with perfect content.  The thing is, data could always be better.  And helping define and get to better is what we do.

One unique element of the DLCD engagement has been the bright light that project members were readily willing to shine on their data issues from the get-go.  This project involved designing a new data platform for the organization, and we were able to start at a jog, because there was never any need to coax or cajole information about data challenges from the stakeholders with whom we met.  Perhaps this group had already been through the data shame therapy?  Or maybe they were so ready for change that they just didn’t care anymore ;-)?  Whatever the reason, it was a little unusual to get to identifying the problems that needed fixing so promptly and directly.  As the end of the project has coincided with the New Year, I’ve been thinking that, for this client, this work has a sort of New Year’s resolution aspect to it.  There is a strong sense of commitment to change, and I gather it is now being accompanied by a growing sense of optimism that the changes they seek are within reach.

While the DLCD’s data may not qualify as ‘big data’ in terms of sheer volume, it streams in from a broad variety of sources and has a reach extending to all the coves and corners of Oregon.  As we have learned more about the work of the agency and its partners, we have come to understand that time-informed analysis of patterns and trends in land use, and the exploration of relationships among complex, amalgamated data, are critical to the agency’s ability to fulfill its mission.  One stakeholder concluded her comment about a database by saying, “We need a better way to analyze and tell the story of land use in Oregon.”

We feel good about the data platform that the DLCD will be implementing based on the design work we have done together.  The design features a rich integration of time and place.  These aren’t garden-variety databases: they were designed with sensitivity to the client’s current and envisioned needs and goals, and they include some novel elements created in response to the specific challenges that stakeholders brought to light.  The design will provide a great foundation from which modern, efficient, and user-focused information management tools can arise.  These folks will have the data infrastructure to capture, curate, store, search, share, analyze, and visualize their information.  This is going to be the platform to support the DLCD’s own brand and style of storytelling.

So, this has been one of those jobs where the promise of meaningful change seems real.  Things can be different.  They will be different. These folks made their resolutions and stuck to them. There’s no shame in that!