Tech Trends Archives

Digital Lessons From a Dying Industry

Yes, some people still read the print version of newspapers. So when the Boston Globe botched the move to a new delivery provider and left us paperless for the better part of a week, it caused a major disturbance in the force for my significant other.

Mistakes do happen, of course, although after 143 years you have to wonder what the Globe doesn’t know about home delivery. But it was its botched response to the delivery problems that holds lessons for anyone trying to succeed in this “digital” age (however you define it).

Here are four digital must-haves I see my clients talking about, how the Globe failed to achieve each and some lessons for the rest of us.

  • Anticipate, don’t just meet, customer expectations: Proactively apologize for failures, tell the customer what compensation (if any) they’ll receive, but most importantly tell them when their problem will be resolved. It took four days for the Globe to even tell us we’d get a credit for the undelivered papers, and no word on a solution except that delivery problems should ease “within the next few weeks.”
  • Provide a seamless, personalized experience across service channels such as phone, Web and mobile apps. Nothing screams “clueless” like endless waits on customer service lines and Web sites that crash under the flood of complaints. When Netflix can instantly stream video to my phone, why couldn’t the Globe buy a bunch of cloud capacity to keep its customer service site up? The Globe showed it’s effective neither in its old business model (delivering physical papers) nor in its new online, digital model.
  • Provide a personalized customer experience that puts the customer’s needs, not what you want to sell them, at the center. The service rep I finally got on the phone not only couldn’t tell me when delivery to my street would resume, she didn’t seem to hear my question. How about a Web site showing, in green, yellow and red, which areas will see their papers the fastest? Or better yet, email or text alerts with updates on when service will resume to a subscriber’s street?
  • Recognize the customer is in charge and act accordingly. With its late and clueless communication, the Globe seems to have forgotten its customers even exist, let alone understood and met their needs. Fixing this doesn’t require sophisticated Big Data social media analytics, just common sense and putting yourself in the customer’s shoes. For example, if you were a subscriber to a daily newspaper, would you be satisfied hearing that delivery will resume sometime in the next few weeks?
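The alert idea suggested above could be sketched in a few lines; the route names, statuses and dates here are hypothetical, purely for illustration.

```python
# Hypothetical sketch of per-street delivery-status alerts.
# Route names, statuses and resume dates are made-up illustrations.
from datetime import date

ROUTE_STATUS = {
    "Back Bay":     ("green",  date(2016, 1, 12)),  # delivering normally
    "Dorchester":   ("yellow", date(2016, 1, 15)),  # partial delivery
    "West Roxbury": ("red",    date(2016, 1, 20)),  # no delivery yet
}

def subscriber_alert(route: str) -> str:
    """Build the email/text alert a subscriber on this route would receive."""
    status, resume = ROUTE_STATUS[route]
    if status == "green":
        return f"{route}: delivery is back to normal."
    return (f"{route}: delivery is disrupted ({status}). "
            f"We expect normal service by {resume:%B %d}.")

print(subscriber_alert("West Roxbury"))
```

Even a status map this crude tells the customer more than “within the next few weeks” does.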

I say all this more from sadness than anger, having admired the Globe for most of my life. It takes strong, healthy news outlets to do investigative reporting like the Globe’s uncovering of sexual abuse in the Catholic Church, as portrayed in the movie Spotlight. The Web has already crippled the Globe; the last thing it needs is to finish the job through its own incompetence.

For the rest of us, it’s a shot across the bow and a heads-up to put the systems and processes in place to respond like a true digital organization when – not if – we have big and unexpected customer service issues.

Author: Bob Scheier
Visit Bob's Website - Email Bob
I'm a veteran IT trade press reporter and editor with a passion for clear writing that explains how technology can help businesses. To learn more about my content marketing services, email or call me at 508 725-7258.

Do We Need An Edward Snowden for Security?

Some people are grateful to former National Security Agency contractor Edward Snowden for revealing massive detail about our government’s intelligence activities. Others think he’s a traitor who’s harmed our national security.

One thing you have to admit: He’s gotten us all thinking about the proper balance between privacy and national security. He’s also changed behavior and attitudes, judging by recent Congressional moves to curtail (for the first time since the 2001 terror attacks) the government’s ability to monitor citizens’ phone records.

Snowden changed the game by fearlessly (or recklessly, depending on your viewpoint) tearing away the veils of secrecy to reveal something he felt was endangering his fellow citizens. Do we also need an Edward Snowden to expose the number, and severity, of security breaches to finally force CEOs and CIOs to make security a top priority?

Denial on Denial

One argument in favor is the tendency of corporate management to focus on security in the wake of a highly publicized attack, but then quickly lapse back into complacency.

It’s easy to shrug this off as clueless C-level executives. But software developers and network administrators who should know better are equally to blame. A chief technology officer at a global IT services firm recently told me programmers still routinely fail to build in protections against common attacks such as buffer overflows.

Security experts routinely say 40 percent or more of successful attacks exploit security vulnerabilities that have been known for years and could have been prevented by following known, straightforward processes such as patching software and turning off unused services. Even when security officers or vendors quantify the risk vs. the cost of security, management will often vote for “I’ll accept the risk” rather than pay more for security.
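Quantifying “the risk vs. the cost of security” usually starts with the standard annualized loss expectancy (ALE) calculation; the dollar figures below are illustrative assumptions, not data from any study.

```python
# Annualized Loss Expectancy (ALE): a standard way to weigh the expected
# cost of a breach against the cost of preventing it.
# All dollar figures here are hypothetical, for illustration only.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """ALE = SLE x ARO: expected loss per year from a given threat."""
    return single_loss_expectancy * annual_rate_of_occurrence

breach_ale = ale(500_000, 0.2)  # assume $500K per breach, once every 5 years
patching_cost = 60_000          # assumed yearly cost of disciplined patching

print(f"Expected annual loss: ${breach_ale:,.0f}")
print(f"Patching costs ${patching_cost:,.0f} a year against ${breach_ale:,.0f} of expected loss")
```

Even with numbers this rough, “I’ll accept the risk” starts to look less like thrift and more like a bet.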

Needed: Harsh Light of Disclosure?

In an era when software controls critical infrastructure such as power plants and dams, medical devices such as pumps, and aircraft (one of which was recently brought down by incorrectly installed software), this lax attitude toward security could cost lives.

While an event such as the battery fires in Boeing’s 787 prompts the FAA to ground the planes until the problem is solved, one security expert I spoke with recently complained there is no “Federal Power Security Authority” to force action if the national power grid were hacked. And in the absence of an outside authority, any company or government agency will always have more to lose than gain by fessing up to a dumb programming or network management mistake.

Such a government agency would, like the National Transportation Safety Board for aviation and rail accidents, be responsible for an impartial review and disclosure of all the facts to tell the public about the risks they face and what is being done to resolve them. After all, a dam that floods a river or two trains that collide due to a software failure kill people just as effectively as an airplane crash caused by a mechanical failure. The hidden dangers will only increase as billions of devices, ranging from self-driving cars to autonomous valves in oil pipelines, join the Internet of Things.

Given our tendency to act only after a disaster, creating such an outside “security review” agency (whether governmental or run by private industry) will probably require a horrific event. Could we get there more quickly if one or more Edward Snowdens spill the beans, hurting companies and agencies in the short run but helping us all in the long run by showing us how vulnerable we are and forcing corrective action?


Content Cookbook #4: Containers

Over the last ten years or so, virtualization has taken over the world of x86-based servers. Creating pools of “virtual” servers on single physical machines slashed capital and operating costs for in-house data centers. It also let newcomers like Amazon Web Services rent out slices of their ginormous compute, storage and network resources to anyone over the Web, slashing computing costs and creating multiple species of cloud computing.

Today’s emerging buzzword is “containers,” which you can think of as “virtualization lite.” Rather than using a hypervisor to control complete virtual operating systems on the same CPU, containers run only the components of the operating system needed to run an application. That promises to cut costs even further, speed deployment and enhance security by allowing greater isolation among applications.
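A minimal Dockerfile shows the idea; this one is a hypothetical sketch, with an assumed app file and dependency, not a recommended production setup.

```dockerfile
# Hypothetical example: the image packages only the app and its
# dependencies on a slim base image — no full guest OS, no hypervisor.
# app.py and the flask dependency are assumed for illustration.
FROM python:3-slim
COPY app.py /app/app.py
RUN pip install flask
CMD ["python", "/app/app.py"]
```

Contrast that with a virtual machine, which would carry an entire operating system image just to run the same small application.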

Docker has the biggest container (if you will) of mind share, with support from big names such as Google, Amazon and Microsoft, but is facing challenges from newer competitors like CoreOS. Even as Docker builds out its own management and orchestration services, partners like Google with Kubernetes and Amazon with its EC2 Container Service are building their own “platform” of container tools.

The container movement is a classic emerging market where customers are begging to be educated – and, in the process, let you shape the terms of the conversation. What content do you need to inform them, judge their interest and nurture them for a possible sale?

Step 1: Awareness/education

Assume the reader has never heard of containers and provide a concise “why should I care” explanation. Compare containers to virtualization, highlighting the benefits (greater reliability, lower overhead) as well as the costs and possible risks (the need for new skills, a fast-changing marketplace, unproven vendors).

Provide a high-level or summary overview of how easy or hard it is to implement containers on various operating systems, such as Microsoft Windows vs. Linux. Show your expertise and thought leadership by “framing” the container conversation. Are they the next big thing? Hopelessly overrated? A raw technology that needs to settle down before anyone commits to it? What unique insights can you provide your prospects based on their size, vertical market, installed technology base, or tolerance for risk?

Offer this piece in ungated (no registration) form and promote it aggressively through social media, paid AdWords and SEO. End it with a link to the second piece, for those ready to look at specific container products and services.

Step 2: Consideration

Drill further into the different flavors of containers from various vendors. Compare Docker’s “platform” approach to CoreOS’ focus on “just the container, ma’am.” Discuss specific implementation scenarios (say, hybrid cloud, or a mix of Microsoft Azure and Rackspace clouds) and what it takes to deploy and manage containers from various vendors on each of them. Discuss in detail whether, when and why containers plus virtualization do or do not provide better security than virtualization alone.

Your aim is to equip the reader to issue an RFP, or at least ask killer questions as they evaluate products. How, for example, does a vendor provide for backup and restore? How does each platform isolate sensitive applications in a multi-tenant environment, where multiple customers share the same hardware, storage and networks? What are the “must-haves” vs. “nice-to-haves” to look for in management tools? What new skills will the customer need to handle containers?

Again, the more you focus this piece on the needs of your specific prospects the better. Sure, talk about the stuff you happen to do well but focus on being even-handed and knowledgeable. End with a gated tease to your third story, focused on those ready to buy.

Step 3: Evaluation/Implementation

Here’s where you show you’ve thought through the deployment and use of containers so thoroughly the reader absolutely, positively has to consider you. Get wicked tactical and detailed. Tailor this very specifically to your prospects’ concerns, ranging from complexity and cost (if they’re small) to security and compliance (if they’re a bank) to scalability and management (if they’re a cloud provider).

If in doubt, drum up some lists based on your real-world experience:

  • “Top seven mistakes our customers made with containers.”
  • “Eight questions to ask about your current environment before choosing a container platform.”
  • “Our five favorite open-source container management utilities.”
  • “Five easy ways to enhance container security.”

Consider rolling in case studies with specifics of your customers’ before and after environments, the time and cost required to implement containers, and of course the business benefits. Gate this with a request for basic contact info (if you haven’t already) or ask for more detail if you’re into progressive profiling.

If you have a marketing automation platform, you can of course score readers based on which of these pieces they read. Any or all of these can also of course be “re-purposed” into Webinars, videos or podcasts, or split into blog posts and Tweets.
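Scoring readers by which pieces they consume can be as simple as a weighted lookup; the content names, weights and sales-ready threshold below are hypothetical, not drawn from any particular marketing automation platform.

```python
# Hypothetical lead-scoring sketch: weight each content piece by how far
# down the funnel it sits, then flag sales-ready readers.
CONTENT_SCORES = {
    "step1_awareness": 5,        # ungated overview
    "step2_consideration": 15,   # product/vendor comparison
    "step3_evaluation": 30,      # deployment-level detail
}
SALES_READY = 40  # assumed threshold for handing the lead to sales

def score(pieces_read: list) -> int:
    """Sum the weights of every piece this reader has consumed."""
    return sum(CONTENT_SCORES.get(piece, 0) for piece in pieces_read)

reader = ["step1_awareness", "step3_evaluation"]
print(score(reader), score(reader) >= SALES_READY)
```

A reader who skips straight to the evaluation piece scores higher than one who only reads the overview, which is exactly the signal you want.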

Let me know what you’re doing content wise to promote containers, or what other technologies or services you’d like to see a sample template for.


Would You Trust Your Directory to the Cloud?

In the mad rush to the cloud we’ve already got SaaS (software, security or storage as a service), PaaS (platform as a service), IaaS (infrastructure as a service), CaaS (communications as a service), NaaS (network as a service), MaaS (monitoring as a service) and even “XaaS” (everything and anything as a service).

What’s next? Directories, those cumbersome-but-necessary databases of users, hardware and other network resources companies use to manage who gets access to which corporate services. At least that’s the vision of JumpCloud, which is extending its cloud-based data center management capabilities with a Directory-as-a-Service (DaaS).

JumpCloud’s pitch is that it’s way too expensive and complicated to host and manage directories in-house, especially Microsoft’s Active Directory (AD). The rival Lightweight Directory Access Protocol (LDAP) has its own complexities, says Rajat Bhargava, JumpCloud’s president and CEO, requiring extensive knowledge of the Linux kernel and a hunt for point-and-click management tools.

Why not, JumpCloud argues, outsource directory services to JumpCloud, using what it claims are its simpler management tools and its links between on-premise AD or LDAP directories and infrastructure as a service providers such as Amazon?

The pricing is certainly attractive, at $10 per user per month (with the first ten users free forever). Chief Product Officer Greg Keller says JumpCloud makes it easier to extend user accounts to the cloud through its ability to mirror AD data to managed hosting services. It also offers agent-based authentication on devices to reduce the IT management load, “while enabling the customer to authenticate and authorize users with cloud infrastructure, IT applications and multiple device types,” the company said.

Another target audience, it says, is companies that have ditched Microsoft Exchange in favor of Gmail but must still use AD to manage their users, devices and applications.

Onwards the Federation  

In the background is the promise of federated identity, in which companies can easily manage user identities whether they’re accessing apps and data in an internal data center or in the cloud. JumpCloud’s aim is not to provide such federated identity as a service but to provide “one directory to rule them all”: a single authoritative source of information that replaces, or at least augments, either AD or LDAP, feeding identity credentials to other components within the enterprise infrastructure.

While JumpCloud’s directory is proprietary, it uses industry-standard interfaces to connect to a wide range of directories and other IT infrastructure. Because it allows AD to be hosted in the cloud rather than a corporate data center, it promises to provide the anywhere, anytime access users require, and remove the complexity of mirroring AD data to a directory server at each corporate location. If a user is out of Internet range, JumpCloud’s local agent stores encrypted credentials so the user can log onto their own device, and access any of the apps or data stored on it.
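The offline-login behavior described above could work something like the following sketch. The storage format and hashing scheme here are my assumptions for illustration, not JumpCloud’s actual implementation.

```python
# Hypothetical sketch of an agent caching a salted, hashed credential so a
# user can still log onto their own device when the cloud directory is
# unreachable. This is NOT JumpCloud's actual scheme, just the general idea.
import hashlib
import hmac
import os

def cache_credential(password: str) -> tuple:
    """Store only a salted hash locally, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def offline_login(password: str, salt: bytes, digest: bytes) -> bool:
    """Verify a login attempt against the locally cached hash."""
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)

salt, digest = cache_credential("s3cret")
print(offline_login("s3cret", salt, digest))   # True
print(offline_login("wrong", salt, digest))    # False
```

The key design point is that nothing recoverable leaves the cloud directory; the device holds only enough to verify, not to reveal, the credential.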

JumpCloud also claims to solve an even nastier problem: tracking increasingly popular Apple devices like iPads and MacBooks within the Windows-based AD without the need for expensive third-party software. It also allows companies to manage their growing clouds of Linux servers through “a single pane of glass,” without the complex scripts and processes it says are needed with continuous integration and deployment tools such as Puppet and Chef.

Nervous Nellies

What’s not to like? Well, last time I checked, security was still the number one fear organizations have about the cloud, and directory data is in some ways the key to the kingdom. “Our company is probably going to do security better than yours,” claims Keller, since many of its engineers and managers are veterans of security-focused startups with skills ranging from authentication and authorization to vulnerability assessment and security research. And since it can spread security spending across customers, he adds, it can make “investments in both technologies and time that would be cost prohibitive for our customers.”

JumpCloud relies primarily on Amazon Web Services for hosting, which Keller says is fully compliant with the highest level of the PCI DSS credit card security standard and with HIPAA and SOC 1 Type II, and is ISO 27001 certified, among others. But with high-profile breaches, even among those who supposedly meet standards such as PCI DSS, I’m guessing security is still a big fear factor.

Second is the need, as when putting any critical data in the cloud, to get the data back in case something horrible happens to JumpCloud or AWS. The company says it’s in the process of developing such export tools, but I’ll bet a lot of customers will want that capability in the bag before signing.

The third challenge is that larger competitors will see the opportunity and out-execute JumpCloud on sales, marketing, operations or any other front.

So let the DaaS wars begin. But first, would you trust your directory to the cloud – and if so, what assurances would you demand of that cloud provider?


Content Cookbook #2: Selling Security Response

(One in an ongoing series of sample IT drip content marketing campaigns. Feel free to steal this sequence or, if you’d like help customizing one for your needs, email or call at 781 599-3262.)

Antivirus products are “doomed to failure.” So says, of all people, Symantec, even though it gets 40% of its revenue from AV.

What’s up? For one thing, AV is not a huge money maker. Second, hackers have moved on from endpoint attacks using viruses. The most serious threats now come from “zero day” network intrusion and denial-of-service attacks that target the core of the IT infrastructure and are too new to be caught by AV scans. As a result, Symantec and other vendors are trying to sell software and services that help customers limit the damage from attacks.

If you’re selling security response services, what sequence of marketing content can help you identify and rate prospects for those services?

Story 1: This captures prospects early in the sales cycle by clearly explaining the limits of AV, the nature of the new threats AV cannot stop and how security response, rather than prevention, can help limit the damage. Be honest about whether antivirus is really “dead” or is just not sufficient, in and of itself, to provide security. Get specific with recommendations without touting your product. Should customers, for example, just get basic free AV for endpoints and focus the rest of their efforts on hardening the core and on security response? If they shift more security spending to the network, specifically where should they invest? And what is the ROI of security response versus prevention?

Offer this content free and promote the heck out of it via emails and social networks. Repurpose it for videos, ebooks, blog posts, contributed op-ed pieces and Webinars. This is your chance to become the trusted voice of reason on this topic. The call to action (CTA) is a link to the more detailed stories 2 and 3 which are aimed at more specific market segments.

Story 2: Focuses on one subset of your target market with specialized content. To find SMB prospects, for example, produce a checklist they can use to determine whether this shift from prevention to response is true for them as well as for large companies. If basic AV is still necessary, what are the “must-have” features an SMB in particular should focus on? And if SMBs should start thinking “response” rather than just prevention, what are the basic “response” steps an SMB should take themselves, given their limited budgets, and what can best be done by an outside vendor?

Gate this content with two to three basic contact/qualification questions, such as name, business email and top security challenge they are facing. The CTA is a link to story three, pulling prospects further through the sales funnel to the product/vendor evaluation.

Story 3: To capture prospects that are in the “consideration” stage of the purchase process, offer tips for evaluating the security response services that are flooding the marketplace. Which of the services they are selling, such as centralized real-time monitoring or documentation and forensics of past attacks are most valuable? What of the incident response workflows they are offering will help limit the damage from each type of attack most effectively? What security response steps should a customer take themselves, and which should they leave to a service provider? What are some of the “gotchas” that could hurt a customer by choosing the wrong provider, and how can they avoid these mistakes?

Gate this content with two or three further progressive profiling questions, such as whether they have (or plan to) create a security response plan and their time frame for action. If you can combine this with third-party data to further qualify them, all the better. If they plan to act soon, the call to action could be a sales call to further discuss their response needs. If they’re months away from action, offer them a subscription to your email newsletter of security response tips, tracking their readership to determine if and when they might be open to a call.

(Note: Feel free to replace each “story” in this sequence with a webinar, video, podcast, white paper or other format. And if you have a product or service for which you’d like to see a sample, drop me a line or call at 781 599-3262.)


Next: DevOps for Databases?

Earlier this year, in a report for PricewaterhouseCoopers’ Technology Forecast, I dove deep into the tools and processes required to merge development and operations and speed applications to market.

Yaniv Yehuda, the CTO and co-founder of DBmaestro, called to say we’d forgotten the vital other half of the DevOps story: the databases without which all those applications are pretty much useless. Unless database changes go through the same rigorous version control process as the application code, including compiling and debugging new scripts in the database, he claims, you’re pretty much asking for failures when the new code and databases hit production.

For example, if you want to add a customer’s Facebook likes or LinkedIn profile to their database record, you also have to add new code to your application to access that information and track their comments, likes and other actions. More than nine out of ten changes to a database’s tables, he says, will influence how the application operates, and thus must be tested as part of the version control and release processes.

Comparing and testing database versions is more complicated than testing code, he says. “With code, you replace components from version ‘A’ with version ‘B.’ If you do that with a database, you lose its content, so you need to create transitional code that tells the production database what to change to accommodate the new structures that were added in development: how to transform ‘A’ to ‘B.’” These new structures include schema (table structure), database code (such as procedures and functions), and content used by the application (such as metadata, lookup content or parameter tables).
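Yehuda’s point about transforming ‘A’ to ‘B’ can be shown with a small transitional script. This sqlite3 sketch is my own illustration, not a DBmaestro example: altering the table in place preserves existing rows, where a drop-and-recreate (as you would do when deploying code) would lose them.

```python
# Illustrating "transform A to B": a transitional ALTER preserves existing
# content, where replacing the table wholesale would destroy it.
import sqlite3

db = sqlite3.connect(":memory:")

# Version A of the schema, with live content already in it.
db.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO customer (name) VALUES ('Alice')")

# Version B adds social-profile columns: alter in place, don't replace.
db.execute("ALTER TABLE customer ADD COLUMN facebook_likes INTEGER DEFAULT 0")
db.execute("ALTER TABLE customer ADD COLUMN linkedin_url TEXT")

row = db.execute("SELECT name, facebook_likes FROM customer").fetchone()
print(row)  # ('Alice', 0) — existing content survives the schema change
```

Generating, versioning and debugging transitional scripts like this one, for every branch and release, is exactly the work tools in this category automate.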

Their DBmaestro TeamWork not only creates and helps debug the new scripts in the database, but also controls the change control process itself to ensure that important changes aren’t lost as distributed teams create branches, sub-branches or interim releases of new databases. For code changes this problem is solved through automated version control systems.

DBmaestro TeamWork links the development of the database code within the database to the management of the version control and deployment process itself. Even if the development process itself doesn’t change, he says, “You have to manage it, track it better, to know who did what when, and to prevent people from overriding each other.”

“This is an enforcer-type tool,” says Gary Leibowitz, the firm’s head of business development. “Anyone who tries to bypass the process is forced to run through it correctly. This allows the database, for the first time, to be an integral part of the change process.”

DBmaestro TeamWork is available now for Oracle and Microsoft SQL databases. Pricing is based on the number of active users.

Dell Big Data


Is Big Data – the analysis of a very large volume of varied data coming in very quickly – only for big companies?

Not at all, according to a recent report we at the Competitive Edge Research Reports unit of Triangle Publishing (with whom I am associated) wrote for Dell Software. The report summarized results of a survey it commissioned on Big Data usage among mid-sized companies (those with between 2,000 and 5,000 employees). Eight out of 10 of the 300 respondents agreed that they need better data analysis to meet their business goals. Virtually all (96 percent) have one or more Big Data projects in place or are starting one.

The most frequent improvements cited by those who have already deployed Big Data are increased product quality, greater ability to identify and exploit business opportunities and a better understanding of customer requirements. Expecting 25 percent greater benefits in many areas, respondents predicted their Big Data budgets will rise to an average of $6 million over the next two years.

So mid-size firms are hot for Big Data. Where do they need help, and what marketing messages will they respond to?

Opportunities: Consulting

Let’s start with what respondents identified as the biggest drivers of Big Data project success: IT/business collaboration, having the proper skills, and performance management to gauge the effects of Big Data initiatives.

This opens up, obviously, a bunch of consulting opportunities. One is the never-ending issue of how to get IT and business to define and manage IT projects carefully enough to get business benefits from them. Specific consulting needs that jump out from the results include:

  • Educating and coaching users on data analysis tools to help them identify, find, cleanse and use the right subset of Big Data to solve their most pressing business problems.
  • Helping users and their peers in IT develop complete and accurate business requirements, so the data geeks know which data and analysis to focus on.
  • Helping customers choose, implement and understand the results of performance management tools, so they can understand how well their Big Data projects are working and how to improve them.
  • General change management and management consulting to overcome historical suspicion between business and IT, reluctance of business units to share data and “siloed” ways of looking at the business that get in the way of Big Data insights.

Note the powerful, underlying theme I’m seeing in a lot of my work for service vendors: The need for a more “industrialized” approach to IT that delivers consistent, repeatable, measurable services. Any frameworks, best practices, templates or proprietary tools service providers can bring to the table are worth highlighting.

Opportunities: Products

The products customers said they need most focus not only on managing huge quantities of data, but on understanding it in real time and easily sharing the results in a form business users can understand. Specific hot interest areas include:

  • Real time processing of data and analytics.
  • Predictive analytics.
  • Data visualization.
  • Access to cloud-based services to provide anytime, anywhere access to data and applications at lower cost.
  • Data aggregation that spans multiple databases.
  • Big Data platforms such as Hadoop.
  • Data dashboards (desktop self-service data integration).

Note also that respondents expect their need for tools that cleanse data (remove inconsistencies and inaccuracies) to rise significantly in two years. This is also an attractive area for services, as data quality requires changes to processes as much as to technology.

Who to Target, What to Say

Given that budget limits are among the top barriers to Big Data projects, your case studies should highlight how you helped customers meet their top goals. According to the survey, the top three were “improve product quality, seize business opportunities and speed decision-making,” followed closely by “obtain better and deeper understanding of customer needs,” “quickly respond to competitive threats or other inputs” and “improve effectiveness of our marketing programs.”

Our findings showed that IT is most involved in Big Data projects, but sales/marketing was a close second. This shows, again, the very pragmatic and business-focused approach the mid-market is taking to Big Data. So does the fact that data from customer/CRM, sales, manufacturing, supply chain/logistics and corporate financial systems are the types of internal data respondents rate as most important to Big Data projects.

In building your Big Data marketing efforts, then, remember to focus on business as well as IT needs, and on how you can help produce repeatable and measurable business results. For more information on the report or our marketing services, please contact Larry Marion at Triangle at


Three Steps to Make Old Technology Cool


Even tape can be made exciting. Well, maybe not all tape…

Are you trying to get the media and bloggers interested in what they think is obsolete, duller-than-dirt or just plain irrelevant technology? We’re talking stuff like mainframe management tools, sequential “waterfall” software development, legacy databases or point-to-point middleware to connect applications.

All have been around for decades, none are new and cool, and all tend to make one’s eyes glaze over. But all of this stuff is still in use. It’s often, in fact, essential to things like ATM networks, manufacturing automation systems or air traffic control systems. Without them, a lot of life would stop. And because they’re often too difficult, expensive or risky to replace, there’s a great story if you can explain how they can be revamped for use in today’s social, mobile, cloud environments.

And that’s where our opportunity, and our challenge, comes in: Pushing back on product managers to make sure they can explain their compelling new story about an older technology. Last fall, Spectra Logic did a super job getting me, and a bunch of other media/analyst types, excited about a supposedly over-the-hill technology: Tape. I was impressed with their new marketing message and since then realized there were three essential elements to it:

  • Assure the market you’re still alive and kicking: Spectra Logic told us they’re consistently profitable, growing and investing heavily in R&D. That takes care of the “company in a dying industry” reaction. And while they sell hardware, most of the value-add (and most of their engineering work) goes into the software that runs it. That makes clear they’re not just a commodity hardware vendor.
  • Pitch an industry-changing vision: Rather than just a “better, cheaper, faster” version of their current products, Spectra Logic laid out a vision of how object-based, long-term storage of unstructured data helps their customers move to the cloud. Note how they cleverly linked old technology (tape) to new, hot buzzwords (object stores and the cloud).
  • Prove the vision: Spectra Logic had new products to announce, real big-name customers on hand to say why they like what Spectra Logic is doing, and a road map for future enhancements.

How might messaging work for other “ho-hum” technologies or ways of working? Try these on for size:

Tired Technology → Wired Vision*

  • Mainframe management: We combine mature mainframe discipline and security practices with modern DevOps-style continuous code updates to safely share the valuable data in core “systems of record.”
  • Legacy databases: Our proprietary parsing algorithms translate the data relationships at the core of relational data stores for use in today’s more scalable cloud-based platforms. You won’t lose the years of analysis and context built into your legacy databases as you move to less expensive, more scalable cloud data stores.
  • Waterfall software development: Our workflow and code repository combines waterfall’s strengths in methodical, step-by-step planning and design with DevOps’ rapid time to market. You get reliability, predictable performance and faster code releases.
  • Point-to-point middleware: Our “wrappering” technology lets you easily use and adapt your platform-specific point-to-point integration to a more flexible, fast-changing services-oriented world. You don’t have to re-invent the integration wheel to get the speed and flexibility of a service-oriented approach.

(*Apologies to Wired magazine.) Now, all of these are off-the-cuff suggestions from a mere ink-stained wretch. True technical wizards will come up with even more relevant and buzz-inducing ideas. What these have in common, and what Spectra Logic did so well, is that they showcase the underlying strengths of the “tired” technology and describe a road map for holding on to those strengths in a more “wired” world.

Our challenge as marketers is to refine our technology “rebranding” until it achieves these three goals. The hard part, of course, comes when our employers (or clients) insist on going public with a rebranding that isn’t fully baked. But that’s another story…
