Thursday, May 17, 2012

Transforming Education



Although the data flow isn’t quite there yet for the personalization of stories in the media space, the world of business and public data has matured to the point at which it can support both hyper-local and personal reporting. More to the point, I would argue that this kind of personalized reporting is the only way we will be able to draw out the actionable insights contained in that data.

In particular, the automatic generation of personalized reporting with feedback and advice holds the promise of transforming the landscape of education.

But what would this mean?

Let me start with an example. My 14-year-old started this year as a freshman in high school and ran into some problems on a physics test. The moment this happened, my first reaction was straightforward: go talk to your teacher and find out exactly what you did wrong. My point was that it could have been anything: issues with framing the problems, misapplication of formulae, etc. Regardless, in order to figure out what to fix, he needed to know what was broken.

It turned out that he had two critical problems.  First, he kept forgetting to include units in his answers. Second, he would make simple computational errors, mostly having to do with flipping the signs in inequalities.  Both problems led to wrong or incomplete answers and were easily fixable.  But it was crucial for him to understand the problem in order to get to the solution.

Now, he goes to one of the better public schools in Chicago and, luckily, has a teacher who is dedicated to his students. But the reality is that this situation is rare, and as a result, this level of personal attention, communication and counsel is often impossible to get. In short, a one-on-one conversation doesn’t scale.

This problem is amplified as more and more of our educational practices move online.  It is hard today for all students to get one-on-one attention in the classroom.  As students move online, it is simply impossible.

Narrative Science has begun work to solve this problem. We are starting with data generated by students who are taking online courses aimed at helping them with standardized tests. In particular, we have an initial configuration of our Quill™ technology platform that generates personalized reports for students who take practice tests each month; the reports are not only retrospective, but provide specific advice on the actions each student can take to improve their performance.

Which is to say, each student can receive a report after each practice test that goes beyond the expected feedback.
This is helpful, but we can go much deeper, because the data about the questions, what they test, and the student’s answers is already there. As a result, reports to the students can also include specific action items that flow from the data associated with each test question: its level of difficulty, how other students are doing on it, and what material it is designed to test. In turn, the students can get highly personalized advice. Because we know what they have done and what it means, we can give them guidance that is specific to their own performance.
The communication is personal, on point, and contains focused action items. Ironically, although it is personalized, it is generated by a computer. But when you are talking about massive volume (25,000 customized reports every two months), computer generation is the only way to reach the scale and comprehensiveness needed to best serve the students.

Testing and practice are part of the learning process. By taking the data that is already part of tests as they stand (the level of difficulty of the questions, the material being tested, the types of conceptual errors that the wrong answers indicate) and using the results as a driver for focused communication, we can change the shape of educational progress. By making analysis of that data part of the process, with focused, personalized reports as the output, we can achieve scale and improve student performance by improving the feedback and advice students receive.
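To make the idea concrete, here is a minimal sketch, with hypothetical data and thresholds, of how question-level results might be rolled up into personalized advice. It is an illustration of the general approach, not our Quill platform:

```python
# Illustrative sketch only: roll question-level results (topic,
# difficulty, right/wrong) up into short pieces of personalized advice.
from collections import defaultdict

def advise(results, min_questions=3):
    """results: list of dicts like
    {"topic": "inequalities", "difficulty": 0.6, "correct": False}"""
    by_topic = defaultdict(lambda: {"right": 0, "total": 0})
    for r in results:
        t = by_topic[r["topic"]]
        t["total"] += 1
        t["right"] += 1 if r["correct"] else 0

    advice = []
    for topic, t in by_topic.items():
        if t["total"] < min_questions:
            continue  # not enough evidence to say anything useful
        rate = t["right"] / t["total"]
        if rate < 0.5:
            advice.append(
                f"You answered {t['right']} of {t['total']} {topic} questions "
                f"correctly. Spend your next session reviewing {topic}."
            )
    return advice

# Hypothetical results for one student
sample = [
    {"topic": "inequalities", "difficulty": 0.6, "correct": False},
    {"topic": "inequalities", "difficulty": 0.4, "correct": False},
    {"topic": "inequalities", "difficulty": 0.7, "correct": True},
    {"topic": "unit conversion", "difficulty": 0.3, "correct": True},
    {"topic": "unit conversion", "difficulty": 0.5, "correct": True},
    {"topic": "unit conversion", "difficulty": 0.6, "correct": True},
]
print("\n".join(advise(sample)))
```

Even this toy version shows the pattern: the advice is driven entirely by data the testing process already produces.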

With the computer, we can create personalized communication and advice at scale.  In doing so, we can amplify and transform the educational experience in this country and the world.



Tuesday, May 8, 2012

The Power of Personalization


One of the advantages of the automatic generation of content is that it provides an opportunity for personalization. That is, news and information aimed directly at an audience of one… you in particular. Many possibilities come to mind: stock market reports that reference your portfolio, sports stories that mention players from your home town, medical stories that tie back to medications you are taking. All of these types of stories could potentially be personalized.

Unfortunately, every time we bring this up, we hear from concerned journalists and experts who track the news business, worried that such personalization will only amplify the already focused news consumption that is the “news filter bubble.” The worry is that people will only ingest those pieces of news that are directly aligned with their beliefs and interests and ignore content that is outside of their current scope. If we personalize the content, they argue, this will only make matters worse. People will read what they already want to read, and the fact that the content is personalized will only reinforce their unwillingness to look elsewhere.

If we were putting together a news-based web site and thinking only about how to customize it to present content based on a particular reader’s interests, I would have to agree… that amplifies the bubble. But we are thinking on a different scale. We are thinking about the content itself and how it can be personalized and made relevant at the micro level, rather than simply presented at the macro level. We are thinking about how content can be made relevant to an individual even when they might not already be inclined to ingest (or understand) that content in its more generic form.

Recall for a moment 2009, when President Obama was talking about gas prices and suggested that people could save 3% on gas by checking their tire pressure and making sure that underinflated tires were not dragging their mileage numbers down. He suggested that another 4% could be saved if people kept their cars in tune. For most people who read about this, the suggestion was abstract at best, and he was widely ridiculed for trying to solve a global issue with a very local solution. Even though he did point out that nationally we would save about 10 billion gallons of gas (a savings of about 30 billion dollars), people had a hard time with the notion.

The problem of course is that these are numbers and ideas that have nothing to do with the individual and so as individuals, we let them slide by.  And the reality is that a number or idea that is ignored is also an action that is not taken.

Imagine instead if the story had been genuinely personalized. Imagine if the story had been focused on you, your car, your local gas prices and your driving habits. For me that would be: given that I drive a 2002 Ford Escape, which gets around 16 MPG, I live in Chicago, where gas prices are at about $4.50 a gallon, and I drive about 1,000 miles a month… I could get a story that tells me I could save about $20 a month, or $240 a year, by making sure my tires were inflated while I gassed up my car.
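The arithmetic behind that sentence is simple, which is exactly the point; here is a minimal sketch using the figures above (the 3% and 4% savings rates are the ones from the speech, everything else is my own situation):

```python
# Back-of-the-envelope personalization using the figures from the
# example above (2002 Ford Escape, Chicago gas prices, ~1,000 miles/month).
miles_per_month = 1000
mpg = 16
price_per_gallon = 4.50
savings_rate = 0.03 + 0.04   # ~3% from tire pressure, ~4% from a tune-up

monthly_spend = miles_per_month / mpg * price_per_gallon   # ~$281
monthly_savings = monthly_spend * savings_rate             # ~$20
print(f"You could save about ${monthly_savings:.0f} a month, "
      f"or about ${monthly_savings * 12:.0f} a year.")
```

None of this is sophisticated analysis; the value comes entirely from filling the formula with the reader’s own numbers.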

This one little piece of personalization changes my relationship with the news I am presented.  It links it to my life and my concerns.  It in no way reinforces a view I already have but opens up the door to understanding how an abstraction is relevant to my day-to-day existence.

Narrative Science is by no means at this point yet. This limit is not because of the technology; it can handle this sort of personalization just fine. It is instead a limit that comes from the availability of the data. But as this information, or more to the point, this data, becomes more and more available, we will be pushing on how we can generate for that audience of one. This is not so that people will only read those pieces of news that they already care about. On the contrary, it is so that they will understand, through personalization of the news, those ideas and issues that they might have ignored because they could not see their immediate relevance.

Ironically, personalization as Narrative Science has begun to do it has the potential to burst the news filter bubble by letting people understand how facts, events and issues in the world can actually impact them as well as how they impact the world.

Next Week: The audience of one in the world of big data: Transforming Education

Wednesday, March 14, 2012

The Problem with Expert Systems or Why We Configure Our Authoring Engine

When Narrative Science works with clients, our standard engagement model is to take data as the starting point and sample documents as the target and then configure our authoring engine to transform the former into the latter. Our interactions are focused on working with clients to capture editorial goals, structure and voice in order to best configure the Narrative Science authoring engine to meet their needs.

Occasionally, an organization will ask us why we don’t simply license the technology to them and let their own team do all of the work. While this type of engagement is on both our business and technology road maps, we have some very specific reasons for not doing so at this point. In large part, these are lessons learned from the field of Artificial Intelligence and the painful history of Rule-Based Expert System shells.

With that in mind, we thought it might be useful to hit on the high points of this history in order to help everyone understand why working with us still involves talking with people.

Expert Systems

The idea behind Rule-Based Expert Systems is a straightforward one.
Step one: Start with an engine that will take rules, if-then rules in particular, and run those rules from some start state to a conclusion.

Step two: For any given subject area, write down all the rules that an expert would use to solve problems in that space. Often these rules are expressed as a decision tree.

Step three: Write down all of the baseline facts that are true (and relevant) in the subject domain so the system has access to basic information about the world.

Step four: Give the system some new facts that will trigger rules and let it run to conclusion.
Now you have an expert system. In fact you have a classic rule-based expert system.

But given how easy this is, why don’t we have a million of these? Why doesn’t this blindingly simple and intuitive model work?

Certainly, the idea works, to a point. The reasoning is built on a centuries-old foundation of logic. We all know the example:
All men are mortal. Rule: IF (man ?X) THEN (mortal ?X)
Socrates is a man. Fact: (man Socrates)
Socrates is mortal. New fact: (mortal Socrates)
You have a rule and a fact. These combine to create a conclusion. Works every time. If I have the rule and assert the first fact, I can now infer the new fact with certainty. There are nuances related to negation, lack of knowledge inferences, uncertainty, and whether knowledge bases are monotonic, but the core notion is clear.

And certainly the engines work. They just look at the facts on the left-hand side of a rule and, if they are true (in the data), assert the resulting facts on the right-hand side as true.
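To make that machinery concrete, here is a deliberately tiny forward-chaining engine, sketched in Python, with the Socrates rule above as its only rule. It illustrates the classic mechanism, nothing more:

```python
# A deliberately tiny forward-chaining rule engine. Facts are tuples;
# each rule pairs a condition pattern with a conclusion, and ?X is a variable.
def match(pattern, fact):
    """Return variable bindings if the pattern matches the fact, else None."""
    if len(pattern) != len(fact):
        return None
    binding = {}
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            binding[p] = f
        elif p != f:
            return None
    return binding

def substitute(pattern, binding):
    """Fill a pattern's variables using the bindings from a match."""
    return tuple(binding.get(p, p) for p in pattern)

def forward_chain(rules, facts):
    """Keep firing rules until no new facts can be asserted."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            for fact in list(facts):
                binding = match(condition, fact)
                if binding is not None:
                    new_fact = substitute(conclusion, binding)
                    if new_fact not in facts:
                        facts.add(new_fact)
                        changed = True
    return facts

# Rule: IF (man ?X) THEN (mortal ?X); Fact: (man Socrates)
rules = [(("man", "?X"), ("mortal", "?X"))]
print(forward_chain(rules, [("man", "Socrates")]))
# The derived facts now include ('mortal', 'Socrates').
```

Real engines handle multi-condition rules, negation, and conflict resolution, but the core loop is no more mysterious than this.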

And it’s not the horsepower. Even with a huge corpus of rules, today we have machines fast enough to figure out in real time what routes you could take from wherever you are to my office, by walking, using public transportation, or driving. Surely they can muscle through some rules. In fact, even in the early days of expert systems the horsepower was there for the rule sets they had.

Yet, even though the rules work and the engines work and the machines work, rule-based expert systems have been utter failures. Why is this the case? The problem is the product of a striking mismatch between the rules the machine needs to do its reasoning and our ability to map our understanding of the world onto them. It is a problem of knowledge engineering.

In the early days of Rule-Based Expert Systems, the people who built the systems wrote the rules. Researchers in Artificial Intelligence would ramp up on new domains of practice and craft the rules needed for their systems to run. Now and again, they might take shortcuts (why have a rule-based system do your math when you can escape into the underlying programming language, which can do it faster?), but, in general, they were able to capture the knowledge needed to solve simple problems in limited domains on a regular basis.

Take, for example, MYCIN, a diagnostic expert system in the realm of medicine, which had on the order of 500 rules crafted by researchers working with physicians. The rules had the expected structure, with a little bit of probability mixed in:
IF the infection is primary-bacteremia
AND the site of the culture is one of the sterile sites
AND the suspected portal of entry is the gastrointestinal tract
THEN there is suggestive evidence (0.7) that the infection is bacteroid.
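Written out as structured data rather than prose, a rule like that might look something like the sketch below. This is a paraphrase for illustration, not MYCIN’s actual internal format; the point is how much has to be made explicit, down to the 0.7 certainty factor:

```python
# A paraphrase of the rule above as structured data (illustrative only,
# not MYCIN's real representation). Every attribute, value, and the
# certainty factor has to be spelled out explicitly for the machine.
rule = {
    "if": [
        ("infection", "is", "primary-bacteremia"),
        ("culture-site", "one-of", "sterile-sites"),
        ("portal-of-entry", "is", "gastrointestinal-tract"),
    ],
    "then": ("infection", "is", "bacteroid"),
    "certainty": 0.7,
}
```

Multiply this by roughly 500 rules, each of which must use exactly the same vocabulary as the others, and you get a sense of what “writing the rules” actually demands.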
MYCIN had an accuracy rate of about 65%. Not all that bad when compared to the 80% accuracy rate of physicians who are specialists in diagnosing infections. This seemed promising, so researchers decided that they should commercialize this work. But in order to build these systems at scale, in order to have a software solution rather than a consulting firm, someone other than the researchers would have to write the rules.

This gave rise to a powerful battle plan. Rather than have researchers write the rules, everyone decided to have domain experts write them. Why not turn these systems into platforms or shells that end users, the customers who were buying these platforms, could use to configure their own systems? Given the early successes, this seemed like a plan that was not only viable, but also obvious. So why didn’t it work? Why don’t we have complete penetration of expert systems in all aspects of business and personal decision-making?

It just turns out that it is really hard to write down the rules. It’s even hard to write down the facts. Beyond that, most people with genuine expertise in an area have intense difficulty introspecting on that expertise and reducing processes that are second nature to them to an explicit rule set. It is particularly hard when those rules start interacting and end up creating a decision tree in which the conditions associated with each branch have to be explicit enough for a machine to calculate and run.

It gets worse when the rules are being written by multiple experts who may not even agree on the specific values associated with these descriptions. And once the rules are written, they can really only be changed by people who do agree; people with a shared and now codified ontology.

No matter what, the rules get complex, the decision tree gets deep and broad and the special cases flourish. And in the end, you are at 65% accuracy. And even when you get the rules mostly right, any change in one can have effects that will cascade through the entire collection, requiring a complete reworking of the rule base.

All of this flows from a basic misunderstanding: a confusion between domain expertise and rule-writing expertise. Not all experts can describe what they do to other people, let alone describe it at a level of detail that can be transformed into machine-executable rules. In fact, one could argue (and we do) that the nature of expertise is such that the vast array of assumptions an expert uses when performing his or her work has to remain implicit if they are going to be able to think at all. Teasing out these implicit assumptions is a task that requires some level of skill, and trying to do so without any background in knowledge engineering is painful and frustrating at best.

This problem, often called the knowledge engineering bottleneck, resulted not only in the downfall of expert systems, but also in a decades-long dry spell in Artificial Intelligence research that we are only now coming out of.

What We Do and Why We’re Different

As I mentioned earlier, the Narrative Science Authoring Engine is not an expert system. But it is unquestionably an Artificial Intelligence system. It uses knowledge of both domains of interest (sports, finance, politics, logistics, etc.) and writing (journalism, client communication, performance reporting, etc.) to create the stories it produces. It also applies a layer of analysis to the data it processes that, for many writers, was never even close to being part of their core expertise. This kind of expertise is simply not in the skill set of most writers, and trying to impose it on them is not something we ever want to ask of our clients.

Because of this, we currently configure the engine in house. Of course this configuration is performed by an editorial staff with expertise in the areas in which they are working. But they are writers who have done considerable work introspecting on their own skills in an effort to transform themselves into not just writers, but meta-writers. As a result, they have skills that are based on a foundation of a deep understanding of their craft. Skills that also include the ability to talk with other writers, in particular our clients’ editorial staff, and transform those conversations into configurations that capture the right tone, structure, analysis, and language.

And once these configurations are complete, the resulting installations can write stories instantaneously at tremendous scale with exceptional quality.

As we move forward, we are mapping these skills onto tools that make the process of configuring the engine more transparent, more fluid and in line with the traditional task of actually writing a story. As we refine these tools for our own use, we will begin to roll them out to clients so that they can become meta-writers as well. But we will only do so when we are confident that the stories the system produces stay at the quality level that we strive for as a company.

So, at some point in the foreseeable future, we will provide clients with a pure software solution, but only after we have mapped our own skills in knowledge engineering onto tools in the same way that we have mapped writing skills onto the core engine. In the meantime, when you work with us, you still have to talk with people!

Thursday, February 23, 2012

Why multi-lingual generation of content from data is important

Multi-lingual generation of content from data has always been on Narrative Science's road map and has informed the modularization of the core platform. It is only after all of the analysis of the facts, evaluation of their importance, and the composition of the representation that the system generates language. Within this model, generating in Spanish, Japanese, German, etc. is no different than generating in English.

The system is not designed to translate, but to generate in multiple languages.
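A minimal sketch of that separation, with toy templates standing in for a real grammar and realization layer (this is an illustration of the architecture, not our engine):

```python
# Illustrative sketch of generation rather than translation: analysis
# produces a language-independent representation of the facts, and a
# per-language realizer renders it. Simple templates stand in for the
# much richer realization machinery a real system would need.
fact = {"team": "Chicago", "opponent": "Detroit", "score": (98, 94)}

REALIZERS = {
    "en": lambda f: f"{f['team']} beat {f['opponent']} {f['score'][0]}-{f['score'][1]}.",
    "es": lambda f: f"{f['team']} venció a {f['opponent']} {f['score'][0]} a {f['score'][1]}.",
}

for lang, realize in REALIZERS.items():
    print(lang, "->", realize(fact))
```

Everything upstream of the realizer (the analysis, the evaluation of importance, the composition of the story) is shared across languages; only the last step changes.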

In general, we are not ready to do this, mostly because of the composition of our client base, but doing so is a matter of pulling in native speakers who know how to write in those languages to configure the platform for each new language.

Occasionally we are asked why we even care, given the rise of translation services. Along with the theoretical answer, that translation requires hard core natural language understanding to really get things right, we also see wonderful examples in the real world.

My current favorite is from a translation into English of a story in Japanese about Narrative Science and Storify. I have no idea what the initial concept was in the original Japanese, but it was rendered into English as:

Than what was in the automatic translation of English to Japanese over Google, has become a meaningful sentence smoothly through many times.

This is strikingly poetic but, more important, a clear argument that opportunities for the automatic generation of multi-lingual content are still out there.

Tuesday, February 14, 2012

Generating stories from social media: Getting to the meat of the tweets

The problem with social media is that there is just so damned much of it. No matter how you want to slice and dice it, the sheer volume is overwhelming. Unless you are looking at a topic or entity for which there is only a trickle of traffic, there will certainly be more information in the stream than a human can deal with on an ongoing basis.

Curation, that is, filtering by topic or keyword, has its role, but the reality is that aggressive filtering using terms, sentiment, authority, and location only has the effect of cutting the hundreds of millions down to tens of thousands, a number that is still unmanageable from a human perspective. And as to readability, short lists are still lists.

The question comes down to this: what is the goal? That is, what insight do we want to draw from the stream, and how do we want to communicate it?

Of course, at Narrative Science, our view is that we want to transform the massive stream of data that flows through the firehose into stories that are human readable and express the insights that are hidden within the stream. In order to do this, we have to track, filter, tag and organize the unstructured stream into a semi-structured data asset that can then be used to support automatic narrative generation.

Our first foray into this work has been to look at the Twitter traffic related to the Republican primary candidates. Using a focused data stream, our technology captures and tags the ongoing conversations and then transforms the resulting data into stories. Our first story type is focused on how the candidates are trending and what topics are the drivers behind those trends. Linking the stream to events in the world, the primaries themselves, our engine can produce a daily report that captures a snapshot of where the candidates are and what issues brought them there.
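As a rough illustration of the kind of computation involved (the counts below are hypothetical, and this is not our production pipeline), a trend-and-topics snapshot might be derived and rendered like this:

```python
# Illustrative sketch: pick the candidate with the biggest jump in tweet
# volume and the topics driving it, then render a headline-style sentence.
def biggest_riser(yesterday, today, topics):
    riser = max(today, key=lambda c: today[c] - yesterday.get(c, 0))
    gain = today[riser] - yesterday.get(riser, 0)
    top_topics = sorted(topics[riser], key=topics[riser].get, reverse=True)[:2]
    return (f"{riser} received the largest increase in tweets today "
            f"(+{gain}), with most users tweeting about "
            f"{' and '.join(top_topics)}.")

# Hypothetical daily counts and per-candidate topic tallies
yesterday = {"Gingrich": 12000, "Paul": 15000, "Santorum": 9000}
today     = {"Gingrich": 19000, "Paul": 13500, "Santorum": 8700}
topics    = {"Gingrich": {"taxes": 5200, "character issues": 4100, "jobs": 900},
             "Paul": {"religion": 2000},
             "Santorum": {"jobs": 1200}}
print(biggest_riser(yesterday, today, topics))
```

The real system layers sentiment, author reach, and links to outside events on top of counts like these, but the shape of the problem is the same: structure the stream, then say something about it.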

While it is still in beta, we thought it might be nice to provide a peek at what is coming with regard to how we are using an ongoing stream of tweets to generate stories that express the state of the world in a form that is ever so slightly more human.

NEWT GINGRICH GAINS ATTENTION WITH HOT-BUTTON TOPICS TAXES, CHARACTER ISSUES

Newt Gingrich received the largest increase in Tweets about him today. Twitter activity associated with the candidate has shot up since yesterday, with most users tweeting about taxes and character issues. Newt Gingrich has been consistently popular on Twitter, as he has been the top riser on the site for the last four days. Conversely, the number of tweets about Ron Paul has dropped in the past 24 hours. Another traffic loser was Rick Santorum, who has also seen tweets about him fall off a bit.

While the overall tone of the Gingrich tweets is positive, public opinion regarding the candidate and character issues is trending negatively. In particular, @MommaVickers says, "Someone needs to put The Blood Arm's 'Suspicious Character' to a photo montage of Newt Gingrich. #pimp".

On the other hand, tweeters with a long reach are on the upside with regard to Newt Gingrich's take on taxes. Tweeting about this issue, @elvisroy000 says, "Newt Gingrich Cut Taxes Balanced Budget, 1n 80s and 90s, Newt experienced Conservative with values".

Maine recently held its primary, but it isn't talking about Gingrich. Instead the focus is on Ron Paul and religious issues.

It is only the beginning, but we see this as the first step in wrangling the firehose and turning the stream into stories.

Wednesday, January 25, 2012

Stories are the Last Mile in Big Data

Conventional wisdom says that in order to understand anything in business, you need to track it. When it comes to sales, logistics, customer service, employee performance, call centers and all of the other issues that drive a business, knowing how you are doing is the first step in understanding how to do it better.

Fortunately, the rise of hyper-low-cost computing and storage, combined with the drive toward bringing more of our data online, has given us a world in which we can now monitor and measure nearly every aspect of running a business. On top of this, we now see a huge surge of information coming from the social sphere that can be harvested and harnessed for business intelligence purposes.

But data is meaningless unless it can be converted to insight. Holding onto the record of every call into your customer service center and all of your product returns is not helpful unless you can establish a correlation between the different dimensions that each captures. Shipping and delivery records make no sense if they are not linked to the features that contribute to on-time performance. Knowing about error rates in production is of little help if you don’t also have records of raw materials and parts from different vendors and work shift information.

Data alone isn’t the answer. In fact, from a business perspective, the data is still part of the problem. Insight is the answer, and insight is derived from the data.

As I write this, numerous organizations, both commercial and research, are attacking this problem. There are substantial efforts aimed at using data mining, correlation analysis, machine learning, recommendation engines and anything else that people can think of to solve the problem of understanding the relationships between the data elements that will help to inform and transform business practices. However, these approaches are often driven by what is doable and interesting from an engineering perspective, rather than useful and readable from a business user’s point of view.

Even when Big Data is drawn down to a usable form which, for lack of a better term, we can call “small data”, the question still remains: “How do I communicate with my users?”

Tables are fine, but they do tend to be hard to deal with regardless of the size of the data sample. Looking at the table below, which is derived from point-of-sale information consisting of millions of rows of data, there is nothing exciting that immediately leaps to mind.

Even when you are motivated to do so, how do you approach reading a table like this without risking a rapid and dramatic loss of interest?

Of course, visualization is often touted as the solution, but is a chart based on these numbers that much better?
While relationships can be drawn out from both the numbers in the table and the bars on this graph, there is still the issue of the level of effort and attention it takes to pull out even the simplest of observations.

So, how about this instead:

Store 9, your sales of Item 6 are far below the other stores in your region. If you are able to up your sales of this product by only 5 units a day, you will be able to increase your profits next month by $1,123. The sales of this product for other stores in your region seem to indicate that this is completely achievable.

Of course, I am cheating here because I am also including data associated with the profit margins of these products that are presented in other tables/charts on other pages not included here. But that is the reality of reports. Also, the message is aimed at a particular store, but isn’t that who should be reading this message anyway? For any pool of data, there are always going to be multiple stakeholders, and they should each be receiving their own targeted messaging.
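For the sake of illustration, here is a minimal sketch of that last mile. The store figures, the profit margin, and the five-unit target below are all assumptions standing in for the real tables and charts:

```python
# Illustrative sketch: from aggregated point-of-sale numbers (hypothetical)
# to a targeted sentence for one store. The margin figure stands in for
# the profit data that would come from other tables.
sales = {  # average daily units of Item 6 sold, by store, in one region
    "Store 3": 41, "Store 5": 38, "Store 7": 44, "Store 9": 12,
}
margin_per_unit = 7.50   # assumed profit margin per unit
days_next_month = 30
extra_units = 5          # the modest daily target used in the story above

region_avg = sum(sales.values()) / len(sales)
laggard = min(sales, key=sales.get)
extra_profit = extra_units * margin_per_unit * days_next_month

print(f"{laggard}, your sales of Item 6 are far below the other stores in "
      f"your region ({sales[laggard]} a day against a regional average of "
      f"about {region_avg:.0f}). Selling just {extra_units} more units a day "
      f"would add roughly ${extra_profit:,.0f} in profit next month.")
```

The analysis is deliberately simple; the point is that the output is addressed to the one stakeholder who can act on it.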

The point is simple. Data is not the target. Data is not the answer. Data is not the insight.

Rather, data is the enabler for the real target: Insight that is communicated to the right person at the right time, in the right way. And although the above story is short, it is clear, clean and to the point. And it is aimed at a business problem that can be addressed through the appropriate analysis of the data and, just as important, the appropriate communication of the message.

I love Big Data. The move towards Big Data has the potential to change the way we do everything. But the last mile has to be the Story; the Story that communicates what is happening in the world, and what needs to be done to fix the problems and exploit the opportunities that analysis exposes.

Of course, this assumes two things:

1. You have some idea to begin with of what stories you want to tell. If you have no idea of what you want to achieve from your data, the likelihood of getting to something of value is low at best. Of course, there are counterexamples to this rule, but they tend to be few and far between. But if you know what sorts of potential stories there are, what you want to get from the data, then the analysis required to get at the insights hidden in it can be focused and effective.

2. You have a technology that allows you to transform that insight at the data level into crisp, clear and focused reports. As Narrative Science has demonstrated, this capability is not only possible, but is actually practical at a level that enables the generation of stories from data at tremendous scale. Going from data, to insight, to 10,000 individuated reports on a daily basis is not an idea, but a reality.

So, while the Story is the last mile in Big Data, it is also the first step. Knowing the stories you want to tell gives you the focus to do the analysis that will allow you to tell them. With the Story, Big Data becomes the transformational force that we all want it to be.

Wednesday, December 7, 2011

Why 90% of news will be computer generated in 15 years

At News Foo Camp, I was asked about how much news will be computer generated in 15 years.  My reluctant take on this was that it would be on the order of 90%.  My reluctance was the result of the fact that while this strikes me as inevitable, it always leads to a fair amount of angst among the people who hear it.  With that in mind, it seemed to me that it might be a good idea to explain what that number means and why I think it makes sense given current information and technology trends.

Data availability

First, given that we are talking about content that is generated from data, that is unambiguous machine-readable data rather than human readable text, it is clear that one of the key drivers will be the availability of the data itself.  There is no question that more and more data, in sports, finance, real estate, government, business, politics, etc. is coming online.  This trend is clear, unstoppable, and even a genuine social good if you believe in transparency.

Likewise, as more and more of the transactions and operations associated with business and commerce are happening online and being metered, we will actually be creating new types of data that describe the world and how it functions.

As the trend continues and accelerates, there will be tremendous opportunities to mine this data, gather insights from it and transform those insights into narratives that can help to inform the public. Many of the tasks associated with data journalism as it stands today will be given over to the machine (under the control of editors and writers), enabling us to generate compelling narratives at scale, driven by all of this data that better describes our world.

But this trend is really only about data as data.  That is, data that is unambiguous and machine readable rather than textual information that is still only understandable by human readers.  The world of human-readable text is a different matter.  This leads me to the next trend.

Turning text into data

On a parallel path, language understanding and data extraction systems are improving to a point at which much of the information that is currently human readable yet impenetrable to computers will be itself transformed into data; data that can be used as the driver for the generation of new narratives.

This means that textual descriptions of events, government meetings, corporate announcements, plus the ongoing stream of social media will be transformed into not just machine readable, but machine understandable representations of what is happening in the world.  This data will then be integrated into the expanding data sources that are already available.
Stories currently driven by game stats, stock prices, employment figures, etc. will be augmented and improved with information, now transformed into data, about off-field player behavior, business strategy, and city council meetings, in ways that will allow human-guided systems to automatically create richer and richer narratives that weave together the world of numbers and events.

But these two trends, combined with computer generation of content, only take us so far.  Given the current models of content creation and deployment, we still think in terms of content in the broad, which limits the scope of the content that we even consider creating.  That leads to a third element: scale and personalization.

Scale and the long tail

As journalism adjusts to the new world, it is clear that in many sectors, there is a need for content that is more narrow in focus and aimed at smaller audiences. This more targeted content, while not valuable to a broad audience, is tremendously valuable to the smaller, niche audiences. Stories about local sports and businesses, neighborhood crime, and city council meetings may only be of interest to a small set of people, but to them, that content is tremendously informative and useful.

The problem, of course, is that these audiences are often too small to warrant the kind of coverage they deserve. The economics of covering Little League games makes it infeasible for organizations to provide the staff and publication resources to produce them. Logistically and financially, it is simply impossible for organizations to produce hundreds of thousands of stories, each of which will likely be read by fewer than fifty people.

As the data becomes available and the computer develops a better understanding of events, however, there is an opportunity to create content like this at tremendous scale. This is an opportunity that only makes sense, and in fact is only possible, through the computer generation of stories. A computer can write highly localized crime reports, personalized stock portfolio reports, and high school and youth sports stories at a scale that provides coverage that was previously impossible and could never be possible in a world of purely human-generated content.

Man and Machine

These three trends (and there are others) come together to provide an opportunity to use computers to automatically create content that serves communities currently ignored by the world of journalism and story production. By creating content that integrates existing and derived data and providing stories that are not just local but actually personal, these systems will generate directly into the long tail of need and interest.

As more and more data becomes available and individuals are given news that is designed to be personally relevant and informative, systems will end up generating far more than is produced today with a volume that dwarfs present production.  Because much of this production will be for the individual, it will never overwhelm, but will instead provide a new sort of news experience in which the events of the day, the events of the world, will be provided in a personal context that makes it more meaningful and relevant.  That 90% of news will be computer generated fifteen years from now seems not just reasonable, but inevitable.