Tegile – IOP Advantages With Unified Hybrid Storage

Tegile Highlights

  • A small US-based company, looking for a measured international expansion
  • Builds flash storage appliances using Hitachi SSDs and OpenSolaris-based software
  • Improves IOPS through metadata acceleration
  • Provides capacity reduction through de-duplication and compression
  • Strong potential for growth, whether independently or as an acquisition target


The world is full of startups – especially in the storage market, where SSD continues to create disruption. There are also clear exit strategies for these small companies as the major vendors snap up the new technology (see Figure 1), enhancing their portfolios and expanding sales through their established sales forces. Tegile, launched in January 2012, is one of the youngest new companies: it combines DRAM, SSD and spinning disk and should enjoy a positive future, whether on its own or as part of a larger company. You’ll want to hear about its approach.

Who Is Tegile?

Tegile started operations in January 2012 with around $12.5 million in VC funding from August Capital and others. It currently has around 40 employees and revenues of less than $10 million (although, as a private company, it doesn’t disclose them). It is concentrating on the US market for now, planning a measured expansion in Europe before addressing the Asian markets. It is ramping up its product introductions, running a lean, cost-effective business and investing in its field sales force. Despite having no formal channel programme as yet, 40% of its sales to date have been through VARs.
Its customer references currently include Washington and Lee University, ipHouse (a service provider) and Starwood Capital Group, for whom it has improved IOPS, reduced capacity requirements and lowered the price of storage.

Zebi Combines DRAM, SSD and Spinning Disk – Big On IOP Performance

Tegile currently has eight Zebi hardware offerings – products which combine large DRAM caches, optimised SSD and high-capacity HDDs. Performance stretches from 30k IOPS for the HA2100 to 200k IOPS for the HA2800F. Its software is layered on top of an OpenSolaris operating system: its work on block and file protocols, as well as its MASS (metadata acceleration for RAID, de-duplication and snapshot pointers), sits on top of ZFS.
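
To make the metadata acceleration idea more concrete, below is a minimal Python sketch of a de-duplicating write path in which block fingerprints (the metadata) sit in a fast in-memory index standing in for the DRAM/SSD tier, while unique data lands on a slow capacity tier. The class and names are illustrative assumptions, not Tegile’s implementation.

    import hashlib

    class MetadataAcceleratedStore:
        """Toy write path: block fingerprints (metadata) live in a fast
        in-memory dict standing in for the DRAM/SSD tier; unique block
        payloads are appended to a list standing in for the HDD pool."""

        def __init__(self):
            self.fingerprint_index = {}   # metadata tier: hash -> block id
            self.hdd_blocks = []          # capacity tier: unique payloads

        def write_block(self, payload: bytes) -> int:
            digest = hashlib.sha256(payload).hexdigest()
            if digest in self.fingerprint_index:
                # Duplicate block: resolved entirely in the metadata tier,
                # so no HDD I/O is needed.
                return self.fingerprint_index[digest]
            self.hdd_blocks.append(payload)            # one slow write
            block_id = len(self.hdd_blocks) - 1
            self.fingerprint_index[digest] = block_id  # fast metadata update
            return block_id

    store = MetadataAcceleratedStore()
    first = store.write_block(b"golden image block")
    second = store.write_block(b"golden image block")  # dedup hit, same id
    assert first == second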

Capacity Reduction And Performance Maximisation

Its ‘capacity reduction’ process involves thinly provisioning volumes, de-duplicating redundant data and compressing what’s left: it claims that its customers use up to 75% less storage capacity as a result. In server virtualisation this process aims to reduce the amount of disk used by cutting down the size of the virtual machine store and compressing what remains, while Tegile also claims to break the linear ‘desktop:spindle’ correlation in VDI implementations.
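
As a rough illustration of the arithmetic behind such claims, the Python sketch below computes the physical capacity consumed under assumed de-duplication and compression ratios; a 2:1 ratio for each would account for a 75% reduction, though the ratios here are purely illustrative and will vary by workload.

    def physical_capacity_needed(logical_written_tb: float,
                                 dedup_ratio: float,
                                 compression_ratio: float) -> float:
        """Physical TB consumed after de-duplication, then compression.
        The ratios are illustrative assumptions, not Tegile figures."""
        return logical_written_tb / dedup_ratio / compression_ratio

    logical = 100.0   # TB written by hosts into thinly provisioned volumes
    physical = physical_capacity_needed(logical, dedup_ratio=2.0,
                                        compression_ratio=2.0)
    saving = 1 - physical / logical
    print(f"{physical:.0f} TB on disk, a {saving:.0%} capacity reduction")
    # -> 25 TB on disk, a 75% capacity reduction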
Its ‘performance maximisation’ process involves extending the cache to the SSD pool, allowing metadata to be processed in SSD; it also provides ‘metadata acceleration’ and allows the caching of the hottest data in a large DRAM memory pool.
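
The tiering idea can be pictured as a simple two-level read cache: a small DRAM front end over a larger SSD pool, with misses falling through to the HDD backend. The Python sketch below is a minimal, illustrative model of that behaviour (assumed sizes and names), not a description of Tegile’s software.

    from collections import OrderedDict

    class TwoTierCache:
        """Toy read cache: a small DRAM LRU in front of a larger SSD pool;
        misses fall through to the HDD backend. Sizes are illustrative."""

        def __init__(self, backend, dram_slots=4, ssd_slots=16):
            self.backend = backend
            self.dram = OrderedDict()
            self.ssd = OrderedDict()
            self.dram_slots, self.ssd_slots = dram_slots, ssd_slots

        def read(self, key):
            if key in self.dram:          # hottest data: DRAM hit
                self.dram.move_to_end(key)
                return self.dram[key]
            if key in self.ssd:           # warm data: SSD hit, promote it
                value = self.ssd.pop(key)
            else:                         # cold data: read from HDD
                value = self.backend[key]
            self._promote(key, value)
            return value

        def _promote(self, key, value):
            self.dram[key] = value
            if len(self.dram) > self.dram_slots:      # demote LRU DRAM entry
                old_key, old_value = self.dram.popitem(last=False)
                self.ssd[old_key] = old_value
                if len(self.ssd) > self.ssd_slots:    # evict coldest SSD entry
                    self.ssd.popitem(last=False)

    hdd = {f"block{i}": f"data{i}" for i in range(100)}
    cache = TwoTierCache(hdd)
    for name in ["block1", "block2", "block1"]:
        cache.read(name)   # the second read of block1 is served from DRAM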

Some Conclusions – New Storage To Address Virtualisation Challenges

Tegile takes a traditional approach to the storage systems market, aiming to have the cheapest, best-performing arrays. It uses its deep technical knowledge of advanced virtualisation to help customers reduce the amount of redundant data and compress what’s left. It isn’t addressing the emerging customer need for heterogeneous array attachment and ‘available as software’ approaches to the storage market, as IBM, Virsto and others do with their storage hypervisors. It is a brand new company aiming to build its international presence. As is often the case in our market, it is likely to get a lot of press. It might even be purchased by one of the large storage systems suppliers in future, as so many other similar companies have been (see Figure 1).

6 Responses to “Tegile – IOP Advantages With Unified Hybrid Storage”


  1. Hi!
    I thought one area of issue with SSDs is the different read and write times, so I Googled Wikipedia and found this paper
    http://www.stec-inc.com/downloads/whitepapers/Benchmarking_Enterprise_SSDs.pdf

    All a bit shocking really. So an empty SSD performs well, but not so well as it fills up. Also, mixing reads and writes can slow everything down.

    I didn’t realise that some SSDs use RAID striping! It appears the design quality of the controller chip is crucial to performance, similar to card RAID controllers.

    I know that the little mini SD drives you find in smartphones wear out if you write more than 100,000 times, so they are not suitable for O/Ss.

    I thought the new RAM SSD had overcome this problem.

    It seems the technology has improved, but they use many work-arounds: expensive, longer-lasting SSD caches on the chips controlling cheaper SSD memory chips, lower voltages, better redundancy, wear-levelling, etc.

    So the question is, how long will a DRAM SSD last in normal server use?
    The answer seems to be that it compares well with disk drives, if you buy the best quality. This success is based on the complex and enhanced technology being used under the hood.

    The only drawbacks seem to be little warning of failure (except possibly increasing slowness) and cost.

    So it seems the clever techniques and technologies offered by Tegile and others are feasible, provided they match the hardware used. They should be all SSD for best performance, at great financial cost though.

    Rich Kightley

    Addendum
    _________

    Slightly off-piste: checking wear-levelling on Wikipedia, I found this shattering statement

    “Conventional file systems such as FAT, UFS, HFS, ext2, and NTFS were originally designed for magnetic disks and as such rewrite many of their data structures (such as their directories) repeatedly to the same area. Some file systems aggravate the problem by tracking last-access times, which can lead to file metadata being constantly rewritten in-place.”

    So that is why disk drives are forever crashing: the O/S lacks the sophistication to use wear-levelling, so a wear-levelling drive is a must. How much pain have we suffered due to MS’s NTFS and Solaris’s UFS?

    That also explains why temporarily rescuing a bad disk (leaving a small unused blank partition where the O/S used to install itself and reinstalling on a second partition) seemed to work.

  2. Rich
    Many thanks for your comments. You might want to look at XIO as well. We wrote it up at http://rainmakerfiles.com/?p=4219, which contains a brief description of other approaches.
    Best Wishes
    Martin

  3. In response to Rich Kightley’s comments on SSD usage: you are correct, Rich. There are a lot of companies out there who still use old RAID constructs on SSD – most, in fact – and this is not a viable strategy going forward. Separation of read and write caching with SSD is extremely important for best performance. You are also correct about the impact of particular file systems on SSD writes. Designing a great system is about total system design, which is why a ground-up new operating environment development is the best way to ensure that SSD is used effectively. I have a blog post here that talks about the benefits of a caching approach based on ground-up system design – you get the performance of SSD at the cost of spinning media: http://blog.starboardstorage.com/blog/bid/189631/Unified-Hybrid-Storage-The-best-use-of-SSD

  4. Thanks Lee. You have a great blog. I’m off to SNW to find out more.
    Best – Martin

Trackbacks

  1. […] variable weekend, late-night or other discounted schema. Like other start-up storage companies (Tegile, Xio, Avere, Violin Memory, etc.) SolidFire has the challenge of persuading customers to trust […]
