On Friday, February 11, 2011, the Obama administration released its plan for reforming the US housing market. The bulk of the plan deals with the future of the Government-Sponsored Enterprises (GSEs) – FNMA, FHLMC and GNMA. As part of the release, the administration reiterated its commitment to winding down Fannie Mae and Freddie Mac and proposed several potential future models for housing finance in the US. All three models depend on the return of private capital to the housing market to replace the GSEs, which currently insure or guarantee more than nine out of ten loans made in the wake of the credit crisis.
As we have expressed in previous postings (most notably “Can You Hear Us Now, Mr. Geithner?” ), data transparency in the mortgage markets is of utmost importance, and the reluctance of the GSEs to provide such data has been a thorn in the industry’s side for quite some time. A market shift from public to private financing would have many benefits, not least of which would be the reduction of federal exposure to the ups and downs of the mortgage market. But of equal importance would be the increased availability of loan-level data for mortgage-backed securities, which would allow investors to truly understand the risk composition of their investments. 1010data applauds the intent of the administration’s plan for reform, and looks forward to seeing the results of its implementation.
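To make concrete what loan-level transparency buys an investor, here is a minimal sketch of the kind of risk-composition analysis it enables. The loans, field names, and risk thresholds are invented for illustration; a real pool would have thousands of loans and many more attributes.

```python
# Hypothetical illustration: with loan-level data available, an investor can
# compute the risk composition of a pool directly rather than trusting a
# pool-level summary. All loans and field names here are invented.

loans = [
    {"balance": 250_000, "fico": 780, "ltv": 0.72},
    {"balance": 310_000, "fico": 645, "ltv": 0.95},
    {"balance": 180_000, "fico": 710, "ltv": 0.80},
]

total = sum(l["balance"] for l in loans)

# Balance-weighted averages: each loan contributes in proportion to its size.
wa_fico = sum(l["fico"] * l["balance"] for l in loans) / total
wa_ltv = sum(l["ltv"] * l["balance"] for l in loans) / total

# Share of the pool in a higher-risk bucket (low FICO or LTV above 90%).
risky = sum(l["balance"] for l in loans if l["fico"] < 660 or l["ltv"] > 0.90)
risky_share = risky / total

print(f"WA FICO {wa_fico:.0f}, WA LTV {wa_ltv:.2f}, risky share {risky_share:.1%}")
```

The point of the sketch is that a pool-level average can hide a large risky tranche; only loan-level data lets the investor draw the bucket boundaries themselves.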
The Harvard Business Review blog recently posted an article titled “21st Century Medicine, 19th Century Practices.” In it, Ashish Jha, an MD and MPH, discusses the dichotomy of medical professionals having “the latest in cutting-edge devices and surgical therapies… while the system that helps us deliver that care has changed very little.”
In recent years we have actually seen greater adoption of electronic health records (EHRs). Jha remarks, “The federal government has gotten involved as well, offering nearly $30 billion in incentives (as part of the 2009 stimulus bill) for doctors and hospitals that adopt and ‘meaningfully’ use EHRs.”
However, Jha also discusses the cultural and economic difficulties of using that data effectively and sharing it across various groups. At 1010 we empathize with this concern, and have experience delivering data analysis to disparate user types with different needs.
He ends with “To really transform healthcare, we need a 21st century health care system where incentives encourage sharing of data and collaboration between providers, not just care in silos. So yes, the U.S. healthcare system is at a crossroads — but we all know which path we’re going to follow. Despite the naysayers, we will modernize healthcare through information technology. We have no choice; we simply can’t improve the efficiency of the healthcare system without it.”
We at 1010 believe that our model of cloud-based analytics can provide an incredibly easy way to share data across groups. The incentives can be properly aligned by using a vendor portal configuration in which major health organizations store HIPAA-compliant, anonymized data. Pharmaceutical companies and medical research groups can access this data for analytical research purposes. Our lightning-fast backend can also serve as the analytic and data warehousing backend for the EHR applications currently used in hospitals.
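As a rough sketch of what anonymization before sharing might look like, the snippet below drops direct identifiers, coarsens age into a decade band, and replaces the patient ID with a salted hash so records can still be linked across tables without revealing identity. The field names, salt, and record are invented for illustration; real HIPAA de-identification involves many more fields and rules.

```python
import hashlib

# Hypothetical sketch: before a record reaches a shared vendor portal,
# direct identifiers are removed and the patient ID is replaced with a
# salted hash. Field names and the salt are invented for illustration.

SALT = b"portal-secret-salt"  # would be a protected secret in practice

def deidentify(record):
    # Stable pseudonym: same patient always maps to the same token,
    # but the token cannot be reversed without the salt.
    token = hashlib.sha256(SALT + record["patient_id"].encode()).hexdigest()
    return {
        "patient_token": token,
        "age_band": record["age"] // 10 * 10,  # e.g. 47 -> 40s band
        "diagnosis": record["diagnosis"],
    }

raw = {"patient_id": "P-1001", "name": "Jane Doe", "age": 47,
       "diagnosis": "E11.9"}
shared = deidentify(raw)  # contains no name or raw patient ID
```

Because the token is deterministic, researchers can join a patient's records across datasets in the portal while the hosting organization keeps the salt, and with it the ability to re-identify, entirely on its side.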
The major health organizations, including insurance companies, will gain a unified data center to improve subscriber care, increase revenue opportunities with pharmaceutical companies and hospitals, and cut their data costs. The users of the data will gain a central location to analyze huge amounts of data.
Andy Hayler of the research firm The Information Difference wrote a nice article for CIO: Lots in Store for Data Warehousing (I am not sure whether the pun in the title is intended, but it does seem to work well as one). He covers some of the history of the technology, citing relational databases and OLTP, and discusses what is hot today and how this relates to where the technology is going. Andy writes:
Some markets which appear to be mature can suddenly become exciting once more. One of the earliest mainstream enterprise applications was the database… But once the relational database became widely accepted there was only a brief period of competition before the market was carved up between Oracle, IBM and Microsoft… Yet in the last five years or so there has been a flood of new entrants to the market, some using quite different database designs from traditional ones. What happened?
He then focuses on some of the forces that have led to disruption and new competition: chiefly, the growth of data and strides in computer design and database architectures. Hayler continues:
This combination of specialist software and hardware aimed at data warehousing has become known as an appliance, though the definition is a little blurry, as some appliance offerings… can operate in the cloud, so do not require hardware on site.
The article focuses more on hardware and software and mentions the cloud only in passing. We believe that the cloud can be more than a delivery mechanism and another way to deploy an appliance for data warehousing – cloud-based analytics can open up new opportunities and lead to wider adoption of self-service, turnkey analytical tools across the enterprise. They can also help achieve another long-sought goal that Hayler writes about:
Sheer size of data has not been the only issue — the need to analyse large volumes of data in something close to real time has allowed further specialization…
…the increasing desire for near real-time analysis and the inexorable rise in the volumes of data that organisations need to handle, promise to keep things lively in the data warehouse market for some time.
1010data and others have proved that cloud-based analytics are one way to achieve these goals.
I read an interesting article in The New York Times (2/12/11) this past weekend about a “Black Hat” trick that caused a certain retailer to continually pop up at the very top of a popular search engine. Regardless of what a searcher entered, including brand-name items, the retailer continually appeared in the top slot (of the unpaid portion of the page). As it turned out, the reason it was deemed a “Black Hat” rather than a “White Hat” trick was that the results were due to invalid data that tricked an algorithm.
So, my brain began to wander and a question came to mind. How would the fast-paced, millisecond mind space of today’s discerning researcher (whether a consumer shopping or an analyst determining a lift in sales) be able to validate that what was before them was actually accurate? In the article, the newspaper was able to hire an investigator of sorts, but in the real world such resources aren’t always available.
In business, results are often tallied, averaged, summarized and aggregated and, while there is nothing wrong with this, one should always be able to validate where the results came from. Should a business always implicitly trust the results in front of them? What if something odd or unexpected should appear? Do you have a way to validate your analytical findings? And maybe the most important question I would love to hear back on is: do you have a transparent view into how the results presented to you were derived, and can you get to them in the instant you need them?
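The idea of a transparent view into derived results can be sketched in a few lines: keep the aggregation and the underlying rows connected, so any summary number can be traced back to the detail that produced it. The data and function names below are invented for illustration.

```python
# Hypothetical sketch: a summary figure and a drill-down into the rows
# behind it, computed from the same source data. All data is invented.

sales = [
    {"store": "A", "sku": "widget", "units": 10},
    {"store": "A", "sku": "gadget", "units": 3},
    {"store": "B", "sku": "widget", "units": 900},  # suspicious outlier
    {"store": "B", "sku": "gadget", "units": 5},
]

def total_units(rows, store):
    """The aggregate: a single summary number for one store."""
    return sum(r["units"] for r in rows if r["store"] == store)

def drill_down(rows, store):
    """The validation path: the detail rows behind that summary number."""
    return [r for r in rows if r["store"] == store]

# A summary that looks odd...
summary = total_units(sales, "B")
# ...can be traced to its source rows in the same instant,
# exposing the single outlier row that drove the total.
detail = drill_down(sales, "B")
```

When the aggregate and the drill-down run against the same rows, an unexpected number is never a dead end; it is a starting point for inspection.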
If you could ask any question about your business, what would that question be? While it may be a fun problem to contemplate, most businesses find that they simply don’t have the tools to do this. When that happens, decisions can be made based on guesses or in a vacuum.
How much does it cost your business to not be able to ask any question of your data? Unfortunately, too many businesses have tolerated suboptimal tools for measuring Key Performance Indicators (KPIs). As a result, while there are a lot of things that they should (and could) know, they only discover what their tools can report.
The good news is that things are changing. More and more, the press is buzzing about the need to tackle “Big Data,” and the acceptance of the private cloud as a viable way to get there. As a case in point, AutoZone recently switched its analytical toolkit to tackle not only big data but also the complex questions it was never able to ask before. Click here and read the Integrated Solutions story about what AutoZone discovered just by being able to ask any question. After you read the article, we’d love it if you could share with this blog and our readers your answer to the question: “If you could ask any question of your data, what would it be?”
An article in the Star Tribune, Cloud of Secrecy Surrounds Credit Scores, revealed new credit scoring methods beyond FICO, the standard that many people are familiar with. It strikes a negative and alarming tone, which is not surprising given the title, and says that consumers should be concerned about the use of their data and lack of transparency into the new methods. Here are a few words from it:
Banks and credit card issuers are digging deeper into people’s lives, using powerful new tools. Beyond the credit score, they are looking at where you live, how often you switch jobs or telephone lines, whether you get paid with direct deposit and other factors to decide whether you deserve a loan. And unlike the well-known FICO score, which can be easily monitored online, most of the new scoring models have been developed solely for the use of lenders. As a result, would-be borrowers may be denied credit based on factors they never knew were relevant.
Looking beyond the main point, which presents a valid concern, the article casts the industry – more specifically lenders, data providers and credit reporting bureaus – as villains and consumers as hapless victims.
We would never argue that consumer privacy is not important – and consumers should of course understand how data about them is accessed and used, and how this can affect lending decisions. At the same time, it is important to point out that transparency can work in many ways – increased industry access to data can often help consumers, and not just those with pristine credit records.
For example, as we said in our article Can You Hear Us Now, Mr. Geithner?, it is important for those in the mortgage-backed security business to have access to certain types of data. This helps them in their work, but it also can help consumers and directly affect things like the availability of capital for lending and mortgage interest rates (and can be done in a way that does not jeopardize privacy). We wrote in that post about how lack of access to consumer loan-level data from Fannie Mae and Freddie Mac can have an adverse impact on the price and availability of mortgage credit for consumers. I encourage you to visit the link and read that post, which explains in detail why this is true.
In summary, it is simplistic to paint the industry as villains and consumers as hapless victims when access to consumer data is the topic; transparency can work many ways, and industry access to data can often help consumers.