Guide to Using Alternative Data In Equity Research to Deliver Alpha: The 13 Stock Picks for H2 2019 (July 16, 2019)

Alternative data has become a mainstream source of data for investment managers, with up to 50% of funds now using it as a part of their research process. The question of how to get value from alternative data is no longer about access to the data sets; there are hundreds of vendors offering alternative datasets or services. Rather, it is a question of how to make that data useful to every equity analyst driving investment strategy.

Download the Picks Whitepaper now.

Large funds have made multi-million-dollar investments in data science teams and big data infrastructure in an attempt to win an alternative data arms race driven largely by the hope of finding the needle-in-the-haystack, big-bet investment. However, with our 13 Alternative Data Stock Picks, we have shown that the most successful approach to alternative data is to embed it directly into the core research workflow for analysts, effectively democratizing alternative data.

This guide outlines how we used the Sentieo platform to pick our initial 11 stocks from the first half of the year, the performance of these stocks, and our Alternative Data Picks for the second half of 2019.

How Sentieo Makes the Alternative Data Stock Picks

For both the original Sentieo 11 and the new Sentieo 13 picks, we used the same methodology:

We started with the Sentieo Mosaic screen, where we looked for:

  •     A high correlation between the alternative data composite and revenue growth and/or KPI
  •     Large acceleration in the alternative data (our proxy for end-user demand) versus the consensus expectations

These correlations and significant changes in trends are what drive analyst usage of Alternative Data.

Usage of Alternative Data in Mosaic is easy to customize and is completely transparent: users can see basket weights, as well as data set performance for different time frames.

Our next step is to marry the broad screen results with our team’s 60+ years of collective fundamental, qualitative investing experience. Sentieo augments human decision-making: the charts and the stats will not give you the “why.” We do not adhere to specific investment style boxes, but we do look for revenue growth as the single largest driver of long-term results. No business ever shrank its way to greatness.

The ideal picks have strong revenue growth because they:

  •     Operate in high-growth industries, supported by long-term secular megatrends
  •     Are the leaders in their respective industries
  •     Tend to be underpriced relative to their growth rates

As a result, this set of long ideas carries relatively high near-term P/E multiples. We also looked at earnings momentum through a combination of classic upward earnings revisions and our alternative data Mosaic index. Most alternative data sets come from consumer-generated data, and, accordingly, most of our picks are consumer-driven businesses. As more consumer behavior shifts to digital, we expect alternative data sets to become more and more predictive.

Sentieo 13 H2 2019 Alternative Data Picks

Our latest set of picks is based on exactly the same methodology as before, but we have widened our focus somewhat. Note: These are not stock recommendations; we are sharing them to show how the Sentieo platform brings together a complete financial research platform with both traditional and alternative datasets.

1) SNAP

Interactive chart

2) PLNT

Interactive chart


3) TWTR

Interactive chart 

4) CROX


Interactive chart

To see Sentieo’s 9 other picks, download the full whitepaper here.

We’ll also be discussing all the picks and how we made them during our upcoming live webinar, featuring Sentieo’s CEO, Alap Shah. Register here.

Disclaimer

The content of this report references opinions and is presented for product demonstration purposes only. It does not constitute, nor is it intended to be, investment advice or recommendations. Readers should assume that Sentieo staff members hold direct and/or derivative positions in all securities mentioned, and may transact in any and all of these securities, at any time, without notice. Seek a duly licensed investment professional for investment advice. Sentieo is not registered in any investment advisory capacity in any jurisdiction globally.

Wall Street Consensus Trades Fell Apart in 2016: 16.2% Underperformance (January 17, 2017)

Following Consensus Trades worked in 2013 and 2014 but started to lose money in 2015. After running the numbers, we were shocked to see this developing consensus underperformance trend accelerate by 1,270bps in 2016.

We analyzed Thomson Reuters' I/B/E/S dataset and looked at instances where analysts were unanimously bullish or bearish on a stock. It turns out that analysts' recommendations correlated strongly with share price performance. However, there was one tiny caveat: the buys dramatically underperformed the sells in 2016. The unanimous buys were up 4.5% while the unanimous sells were up 20.7%, so a market-neutral consensus portfolio lost ~16.2% last year. It turns out that 2016 was a year where betting against the analyst herd paid off!
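For reference, the headline number is just the long/short spread:

```python
# 2016 performance of the two unanimous cohorts (figures from above).
unanimous_buys = 0.045    # +4.5%
unanimous_sells = 0.207   # +20.7%

# A market-neutral "consensus" portfolio is long the buys, short the sells.
market_neutral = unanimous_buys - unanimous_sells   # ~ -16.2%
```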

[Chart: distribution of 2016 share price performance, unanimous-buy vs. unanimous-sell cohorts]

The chart above shows the distribution of share price performance for the two cohorts. (The bullish group had outliers up +1618% and +1189% that are not shown.) While the winners in both groups performed roughly the same, the losers in the bullish group fell further than those in the bearish group.

One major factor behind the underperformance was the bullishness surrounding small-cap development-stage pharma stocks.  For healthcare stocks (which were mostly small-cap pharma names in our cohorts), the number of consensus buys outnumbered the sells by a factor of 13.7X versus a baseline rate of 2.24X.

Quick methodology notes

We looked at stocks where either (A) every analyst on a stock had a buy rating or (B) every analyst had a non-buy rating as of Dec 17, 2015.  A stock with 100% holds and 0% sells would fall in the bearish group since it had 0 buy ratings.  Our look at analyst ratings basically considers holds to be a polite way for an investment bank analyst to put a sell rating on a stock.
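The grouping rule can be made concrete with a small sketch (a hypothetical helper working from rating counts, not the actual screening code):

```python
def classify(num_buys, num_holds, num_sells):
    """Unanimous-buy vs. unanimous-non-buy grouping, as described above.

    A stock with 100% holds lands in the bearish group: holds are treated
    as a polite way of saying sell.
    """
    total = num_buys + num_holds + num_sells
    if total == 0:
        return None                    # no coverage, no cohort
    if num_buys == total:
        return "unanimous_buy"
    if num_buys == 0:
        return "unanimous_non_buy"     # all holds and/or sells
    return None                        # mixed ratings: excluded
```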

Please take our data with a grain of salt.  This was a quick analysis where we used share price performance instead of total return.  Performance ignores dividends, corporate actions, survivorship bias, and stock borrow/lending fees.  Where price data was unavailable or had errors, some stocks were omitted.

Bullish on biotech

The unanimous buy cohort was quite bullish towards the healthcare sector.  315 out of the 1031 unanimous buys were for healthcare stocks, while 23 out of the 459 unanimous sells were for healthcare stocks.  That’s 30.55% of the buy cohort compared to 5.01% of the sell cohort.  This was a very unusual imbalance.
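These proportions, and the imbalance factor quoted earlier, can be verified directly from the raw counts:

```python
# Healthcare representation in each cohort (counts from above).
hc_buys, total_buys = 315, 1031
hc_sells, total_sells = 23, 459

buy_share = hc_buys / total_buys      # ~0.3055 -> 30.55% of the buy cohort
sell_share = hc_sells / total_sells   # ~0.0501 -> 5.01% of the sell cohort

# Healthcare buys per healthcare sell, vs. the all-sector baseline.
hc_imbalance = hc_buys / hc_sells     # ~13.7x
baseline = total_buys / total_sells   # ~2.246 (quoted above as 2.24X)
```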

Also note that the fixation over biotech has been growing over the past few years.  Since Dec 2012, healthcare stocks have grown from 20.77% of the buy cohort to 30.55% of the Dec 2015 cohort.  In that same timeframe, healthcare representation in the sell cohort fell from 5.80% to 5.01%.  The next most imbalanced sector was technology, which made up 12.42% of the Dec 2015 buy cohort and 7.84% of the sell cohort.  Technology was not nearly as imbalanced as healthcare.

Keep in mind that the “healthcare” stocks in our cohorts aren’t the Pfizers and Mercks of the world.  Due to small sample sizes, small cap stocks are significantly over-represented.  Small caps are more likely to have a unanimous consensus especially if there is only 1 analyst rating a stock.  (We could reduce the sample size problem by requiring a minimum of 5 ratings.  This would widen the performance gap for 2016.)  So, the “healthcare” stocks in the cohorts are actually mostly small-cap pharma stocks.

A historical perspective

2016 was an unusual year where the unanimous sells strongly outperformed the unanimous buys.   That hasn’t always been the case.  Suppose that the strategy is to hold stocks for 1 year.  Cohorts are chosen based on the unanimous buys/sells from the middle of December of the prior year.  Here are the historical spreads between the unanimous sells and buys:

2016: 15.5% (unanimous sells outperformed buys; contrarianism paid off)
2015: 2.8%
2014: -3.4% (following the herd paid off)
2013: -2.5%

Note that some portion of the spread has been driven by sector exposure to smaller biotech stocks.  We can use the equal-weighted biotech ETF XBI as a proxy for biotech.  XBI’s share price performance in the past 4 years has been:

2016: -15.7%
2015: 13.0%
2014: 43.2%
2013: 48.1%

So, biotech exposure partially explains the performance gap but does not explain everything.  Suppose that we removed all healthcare stocks from the Dec 2015 cohort used for the 2016 performance figures.  This would reduce the spread from 15.5% to 7.1%.

There’s another method for controlling for differences in sector exposure.  We can adjust the performance figures by grouping the stocks into baskets based on sector.  Then, we can change the weights on the baskets in the unanimous sell cohort so that sector exposure matches the buys.  The historical spreads become:
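A sketch of that reweighting, using the sector shares quoted in this post and made-up basket returns (we assume equal-weighted baskets purely for illustration):

```python
def sector_matched_return(sector_returns, target_weights):
    """Average a cohort's per-sector basket returns using another cohort's
    sector weights -- here, pricing the sell cohort at the buy cohort's
    sector mix so the two are comparable."""
    return sum(target_weights[s] * sector_returns.get(s, 0.0)
               for s in target_weights)

# Toy sell-basket returns, reweighted to the buy cohort's sector mix
# (30.55% healthcare and 12.42% technology, per the text; remainder other).
sell_baskets = {"healthcare": 0.30, "technology": 0.10, "other": 0.05}
buy_weights = {"healthcare": 0.3055, "technology": 0.1242, "other": 0.5703}
adjusted_sell_return = sector_matched_return(sell_baskets, buy_weights)
```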

2016: 7.6%
2015: -2.8%
2014: -3.9%
2013: -1.3%

Removing distortions caused by sector exposure shows that the spread has roughly averaged out to zero over the past four years (with perhaps a slight edge to the buys). However, the unanimous buys have been on the wrong side of predicting where sectors were headed: over the past four years, the sells outperformed the buys by about 1.6% annualized once sector-weighting differences are included. Due to limitations in the methodology, reliable conclusions cannot be drawn about whether analyst recommendations can help predict stock prices. The numbers here also likely do not generalize to all investment analysts polled by I/B/E/S: prestigious bulge-bracket investment banks, star analysts, and independent research providers (which do not seek investment banking business) are likely severely underrepresented in the cohorts.

Will biotech giveth or taketh away?

What we can safely conclude is that 2017 looks like yet another year where biotech will be a major player. 30.87% of unanimous buys in the mid-December 2016 cohort were for healthcare stocks, slightly higher than the previous year, when that number stood at 30.55%. There seems to be a segment of the analyst community that is unusually exuberant about biotech stocks. It will be interesting to see whether their optimism becomes reality.


Some interesting links

Barber, Lehavy, McNichols, and Trueman wrote a paper “Reassessing the returns to analysts’ stock recommendations” that examines the value of analyst recommendations from 1986-2001.  Their conclusion?

Even with their poor performance in 2000-2001, for the longer (16-year) 1986-2001 period, the most highly recommended stocks still generated significantly greater average annual market-adjusted returns than did those least favored (2.44 percent as compared with -9.94 percent).  These relative returns reflect favorably on the long-term value of analyst recommendations, as long as the 2000-01 results are simply an aberration that is unlikely to be repeated.

Technical Thoughts: Sentieo's Alexa Skill and the Three Fundamental Laws of Voice User Experience $AMZN $GOOGL $AAPL (September 30, 2016)

Sentieo’s Alexa Skill is live! We present some thoughts from our technical team recapping our experiences for the benefit of those who are keen on considering the future of computer interfaces.

For Voice User Interfaces (VUIs) to have any chance of success, the future direction of Voice User Experience (VUX) will be strongly tied to physical, not software, constraints.

The three defining requirements will be:

1) At least 100 words per minute (wpm) input

2) close to 200wpm output

3) under 250ms response time.

We are nowhere close.

Voice User Experience

We have just updated the Sentieo Skill on the Alexa Skill Store, where it now ranks among the best Finance skills (higher still if you strip out all the bitcoin noise). We thought we would share a few thoughts on our experience redesigning the Sentieo experience, first created for desktop and then mobile, for a radically different interface.

With the linguistic abstraction infrastructure finally in place to separate voice software engineering (executing specific intents with data and integrations) from language processing (a natural monopoly: parsing general human speech into specific intents and vice versa), plus supportive hardware, there will undoubtedly be a wealth of development, with the ecosystem benefits deservedly accruing to Amazon. Ours was roughly the 2,000th skill to hit the Alexa store, three months after it crossed the 1,000 mark.

This, together with the recent attention on chatbots, has predictably prompted all sorts of manic speculation, including “the Death of the GUI”, but that discussion is premature until key issues in the development of the Voice User Interface are addressed. Simply put, apart from simple hands free convenience, we haven’t figured out where the VUI absolutely dominates. You see this when your bank gives you the option to “Speak to a human representative”.

In this very real sense, the VUI is a solution in search of a problem. We aren't even very good at the solution yet: we are terrible at transcribing accents and abbreviations; context management and intent disambiguation are a mess; input mappings are naturally many-to-one while output tends toward one-to-one; and we haven't even tried our hand at "nontextual" verbal data like voice recognition (multiple speakers), sarcasm, humor, and tone. And let's not even talk about privacy issues in practical implementations.

There is a common implicit assumption that these problems go away as research and infrastructure in language processing improve, but in fact the meta-problems endemic to voice software engineering are perhaps even harder to solve because they run into physical "laws". Even if we do the voice equivalent of assuming a spherical frictionless cow and assume all speech is perfectly translated to intent, there are still terminally intractable problems in the field of voice that, for want of a better term, we will call UI efficiency (although there are formal definitions of this).


Why We Want UI Efficiency

A quick, oversimplified review of user interface history through the lens of efficiency:

  • We moved from punch cards to command lines because virtual punch cards were more quickly iterative and thus more efficient for input than physical punch cards. (As a bonus, they were less prone to corruption…)
  • We moved from command lines to graphical interfaces because inputting information in two dimensions is more efficient than inputting information in one. (As a bonus, they changed commands from memorized text to thoughtfully placed buttons, spawning an entire field of design.)
  • We moved from graphical interfaces to touch interfaces because it removed an unnatural translation — moving my hand on the x-y plane moves the cursor on the x-z plane — and is cognitively more efficient. (As a bonus, the lack of stylus or mouse helped get us mobile.)

You see where this is going, and what question we will have to answer in a post-touch world. Every iteration is more efficient and accompanied by an order of magnitude change in input friendliness. There is de minimis tradeoff and the new UI dominates in basically every metric each time.

Implications of the UI Efficiency framework

In this light, we understand that chatbots are really irrelevant: they represent two steps backward to command lines AND have language processing issues, creating a very high bar for truly structural UI efficiency. But they may be a fantastic test case for rapidly and costlessly improving language processing, and that is not worth nothing.

We have chosen here to stress inputs because visualization was pretty much always two dimensional from the outset. However, output efficiency is also likely to increase in relevance going forward as technologies improve in the audio and visual spheres.

Understanding that it is input efficiency that drives mass adoption and “killer apps” means that for the VUI to get anywhere we have to figure out 1) what exactly the efficiency improvement is and 2) what the step change benefit will be. Our view on 1) is obfuscated by pesky natural language issues and for 2) our best answer is hands-free and eyes-free interaction.

Ironically, it should be blindingly evident that one of the biggest drivers of benefit for 2) is about to go away. The biggest use case for voice interaction is while manually driving. This use case will diminish in direct proportion to the adoption rate of autonomous vehicles.

While we wait for a better 2), we are left with a thought experiment on 1): what is the upper limit on input and output efficiency in a VUI and what conditions must exist to get there? In other words, what is the ideal Voice User Experience if UI Efficiency alone dictated success or failure?


We are fully aware of the futility of pinning numbers on future unknowns, but we are going to try anyway so that we can get an idea of the magnitude of improvement.

VUX Law #1: Maximize Natural Input— VUI input speed must be at least 50% HIGHER than existing UI

Here are some facts to know (all rough estimates for average anglophones, easily searchable so left uncited):

  • The average writing speed is 25wpm
  • The average typing speed is 40wpm
  • The average talking speed is 100wpm

The important thing to note about these speeds is that, unlike a physical speed limit, it is as painful to go DOWN in speed as it is to go up. We absolutely CAN slow our speech by 60% for better machine comprehension. Is it great UX? Hell, no.

So not only is voice input potentially much faster than typing, it HAS to be much faster than typing. Incidentally, this means that voice software engineering will tend naturally toward machine learning since we can use the wealth of data to arrive at better outcomes than deterministic logic trees. But that’s nothing new.

VUX Law #2: Minimize Output Tradeoff— VUI output speed must be able to go up to around 200wpm, or 33% more efficient than regular listening

  • The average listening comprehension speed is 150wpm-200wpm.
  • The average reading speed is 250wpm

However, there is ample evidence to suggest that our current average listening speeds are simply being dragged down by our average talking speed. Take any podcast and ramp it up to 2x playback: you can still listen comfortably, and only in the 3x-4x region does the experience really start to degrade. At an average talking speed of 100wpm, that works out to a 200wpm natural listening speed.

This matters because we want to minimize the tradeoff between listening and reading, a key feature of every generational shift in UI (as discussed above).

VUX Law #3: Constrain by Conversational Constant— VUI I/O responsiveness must be under 250ms

More facts:

  • The “feeling of being instantaneous” barrier is 100ms
  • The global average time between two participants in a conversation is 200ms
  • The “flow of thought” barrier is 1,000ms
  • Alexa’s default timeout is 3000ms
  • My Alexa Skill’s average time to execute is 4000ms, relying on 2 slow APIs for data (could be optimized…)
  • The “attention” barrier is 10,000ms
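Taken together, those thresholds form a latency budget; a toy classifier makes the bands explicit (the labels are ours, the millisecond boundaries come from the list above):

```python
def responsiveness(latency_ms):
    """Band a voice response time against the thresholds listed above."""
    if latency_ms <= 100:
        return "feels instantaneous"
    if latency_ms <= 250:
        return "conversational"        # within the ~200ms turn-taking gap
    if latency_ms <= 1000:
        return "breaks flow of thought"
    if latency_ms <= 10000:
        return "strains attention"
    return "abandoned"
```

By this banding, our skill's observed ~4,000ms execution time sits two full bands beyond the 250ms conversational budget.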

It’s worth giving a full read of the fascinating study on “the conversational constant”:

Conversation analysts first started noticing the rapid-fire nature of spoken turns in the 1970s, but had neither interest in quantifying those gaps nor the tools to do so. Levinson had both. A few years ago, his team began recording videos of people casually talking in informal settings. “I went to people who were sitting outside on the patio and asked if it was okay to set up a video camera for a study,” says Tanya Stivers.

While she recorded Americans, her colleagues did the same around the world, for speakers of Italian, Dutch, Danish, Japanese, Korean, Lao, Akhoe Haiom (from Namibia), Yélî-Dnye (from Papua New Guinea), and Tzeltal (a Mayan language from Mexico). Despite the vastly different grammars of these ten tongues, and the equally vast cultural variations between their speakers, the researchers found more similarities than differences.

The typical gap was 200 milliseconds long, rising to 470 for the Danish speakers and falling to just 7 for the Japanese. So, yes, there’s some variation, but it’s pretty minuscule, especially when compared to cultural stereotypes. There are plenty of anecdotal reports of minute-long pauses in Scandinavian chat, and virtually simultaneous speech among New York Jews and Antiguan villagers. But Stivers and her colleagues saw none of that.

Reflections

It is striking how these conclusions, arrived at from a high-level human-interaction point of view, run exactly opposite to the design choices the Alexa/Echo team has made so far:

  • Fixing Alexa output voice at a plodding speed (easily fixable)
  • Advising everyone that the Amazon.Literal type only be used with short phrases
  • Not allowing compound intents and actions
  • Not natively supporting intent disambiguation
  • Allowing timeouts to go as high as 10,000 ms
  •     I haven’t done the math, but I wonder whether putting everything in the cloud makes things that much slower. At modern broadband transmission speeds this is probably not a material concern.

No shade thrown on the Alexa team at all, but we think it likely that all of these choices will have to be reversed over time for the above VUX laws to be satisfied. We are particularly interested in the idea of predicting and narrowing down voice queries to reduce response times, as this matches what we do in real life.

Parting thought

Remember that use case we casually wrote off as if it was a done deal?

The biggest use case for voice interaction is while manually driving. This use case will diminish in direct proportion to the adoption rate of autonomous vehicles.

That’s probably going to take longer than anyone reading this would like to become a reality. Meanwhile, there’s another interesting use case on the rise — one where your eyes and hands are fully occupied with a need to still interact with the computer…

Oil & Gas Research the Smart Way with Sentieo $NFX $FANG $CLR $APC (July 5, 2016)

The O&G industry reports an enormous amount of data, in both volume and detail—from drilling rig and pressure pumping data to well production info. Finding and analyzing all of this information for your investment ideas is a necessary but time-consuming process. Designed by buysiders for buysiders, Sentieo is the best tool on the market for leveraging technology to rapidly compress your research cycle and give you more time to generate true alpha insights.

In this post, I’m going to give you a glimpse into the world of oil & gas research using Sentieo—so that you can spend more time analyzing the findings and coming up with answers to questions such as:

Which E&P companies might be at risk of defaulting on their loan obligations?

Has an E&P operator you are following announced those new well results yet?

What would this company-specific data look like if I plotted it against other metrics?

What are some ways I can use Sentieo to research industry trends?

What are companies saying about break-even oil prices and well economics?

How many drilled but uncompleted wells are in a company’s backlog?

Let’s get started.

Digging deep through SEC filings: Missed interest payments

Let’s assume you know nothing about the E&P space but want to take a look at companies that have missed interest payments. You can try running a simple search for ‘interest payment’ or ‘missed interest payments’, but this is likely to return too few or too many results. After doing a bit of reading I noticed that in most instances where a company mentions a missed interest payment, there will be some combination of the following words in the paragraph:

  • grace period
  • elect(ed)
  • interest payment

I ran a search asking Sentieo to return all results where these words were within 30 words of each other. These were the results:

I sifted through a couple of the documents and took a few highlights, labeling them under the custom label Grace Period. These highlights were automatically tagged and stored in my Sentieo Notebook for easy recall. In this view, I am looking at three separate documents where I applied the Grace Period highlight. You can use these names as a possible starting point for finding companies that might be running into liquidity issues or are at risk of default.
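The proximity query above ("these words within 30 words of each other") can be approximated in plain Python. This is a naive linear scan for illustration only, not Sentieo's query engine; multi-word phrases like "grace period" are approximated here by single-word stems:

```python
import re

def within_n_words(text, terms, n=30):
    """Naive proximity match: True if every term (matched as a word prefix,
    so 'elect' also hits 'elected') appears inside some window of n
    consecutive words."""
    words = re.findall(r"[a-z']+", text.lower())
    stems = [t.lower() for t in terms]
    for start in range(len(words)):
        window = words[start:start + n]
        if all(any(w.startswith(s) for w in window) for s in stems):
            return True
    return False
```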


Tracking well results 

Energy investors with an interest in the Anadarko Basin will probably be closely watching Newfield Exploration’s ($NFX) STACK infill spacing pilot, which, if successful, could add to its inventory and provide NAV upside. The first of two planned pilots, the Chlouber, is currently underway, with a second pilot scheduled to be spud once the appropriate location is found. Both pilots should be online by the end of 2016.

You can easily keep tabs on the results of this pilot by setting up a Sentieo keyword alert.  Sentieo will send you an e-mail and/or in-app notification when there are new results for your search so that you don’t have to keep running the same search time and time again.

Let’s explore this by setting up a Sentieo alert for mentions of the Chlouber pilot:

Step 1) Pick your search term. You are allowed more than one word, though usually one word will do.

Step 2) Add any filters you would like to apply to your search such as a company specific ticker, sector filter, etc. In this case, I am including NFX as my ticker and Chlouber as my search query.

Step 3) Click on the floppy disk icon to the right of the query search bar

Step 4) Select your alert preferences.  Click Desktop and/or E-mail and then hit Save.

The next time Chlouber is mentioned in any press release, presentation, 8-k, 10-K, 10-Q, etc, I will immediately be notified by Sentieo.  This can be especially useful during earnings season when there are multiple press releases and earnings calls at the same time.
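Conceptually, a keyword alert just re-runs the saved query against each newly ingested document. A toy version of that loop (hypothetical document feed, not Sentieo's API):

```python
def check_alerts(new_documents, saved_queries):
    """Return the (ticker, keyword) hits to notify on.

    new_documents: iterable of (ticker, doc_text) for freshly ingested docs
    saved_queries: iterable of (ticker, keyword) saved by the user
    """
    hits = []
    for doc_ticker, text in new_documents:
        for ticker, keyword in saved_queries:
            if doc_ticker == ticker and keyword.lower() in text.lower():
                hits.append((ticker, keyword))
    return hits
```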


Creating charts using data extracted from filings

Let’s say you are looking at a 10-K or 10-Q and come across some interesting data inside of a table you would like to further explore. Thanks to our table extraction technology, you can quickly send any data from a table directly to our data visualization tool (Plotter) and layer in additional data-sets (such as your own .csv file) or traditional financial metrics.

Given the current commodity price environment, the health and quality of a company’s balance sheet has become a more important factor when evaluating E&P companies, perhaps even more important than production growth and asset quality (which carried more weight in 2015).

Pick a handful of the top-performing E&P’s in 2015 and you’ll likely notice that a lot of the top performers had low net debt/EBITDA multiples vs their under-performing peers.

In this example below, we have a couple of different data sets being displayed all at once.

I used the Time Series function to crawl through all of Diamondback’s previous filings and create a composite table of their historical average price of oil per barrel (dashed red line) versus WTI crude oil prices (dashed blue line). I then added Price/Cash Flow (pink line) and EV/EBITDA multiples (green line). Finally, I threw in the stock price (yellow dotted line) and created my own hybrid series, Debt/Shareholder Equity (solid blue line), by dividing Debt by Shareholder Equity.

In this one chart, we can visualize the following:

Average selling price of oil trending down with the price of WTI

Leverage trending down over time

Stock price, Price/Cash Flow, and EV/EBITDA shooting up

The best part is that you can save any chart as a template, which allows you to recreate the graph for other companies simply by entering the ticker. There is no limit on the number and types of data sets you can overlay with Plotter.
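Under the hood, a hybrid series like Debt/Shareholder Equity is just an element-wise ratio of two extracted series. Here is a minimal sketch in plain Python; the quarterly figures are illustrative placeholders, not Diamondback’s actual balance sheet data:

```python
# Rebuild the "hybrid series" by hand: Debt / Shareholder Equity per quarter.
# All figures are hypothetical placeholders, not Diamondback's reported numbers.
quarters = ["2014Q4", "2015Q1", "2015Q2", "2015Q3"]
total_debt = [2500, 2480, 2450, 2400]            # $mm, from balance sheet tables
shareholder_equity = [4000, 4100, 4300, 4500]    # $mm

debt_to_equity = [round(d / e, 3) for d, e in zip(total_debt, shareholder_equity)]
for quarter, ratio in zip(quarters, debt_to_equity):
    print(quarter, ratio)  # leverage trending down over time
```

In Plotter the extraction and division happen for you; the sketch just shows what the derived series is.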

Researching industry trends

Document search is another great way to find out what companies and analysts are talking about in the E&P space. By performing an open-ended search (not specifying a ticker), you can see what every company in the space is saying about your search term. Below are some relevant searches an E&P analyst might be interested in running:

Theme: Companies continuing to reveal improving capital efficiency and the ability to do more with less (production beats, lower than expected operating costs, improving well performance, declining q/q capital spending).
Keywords: Further efficiency gains, cost savings, more with less, what inning, operational gains

Theme: Companies delivering strong performance on costs, which helps contribute to EPS beats and drives stronger capital efficiency. Many factors contribute to the substantial improvement in LOE costs (reduced water handling costs, lower fuel and electricity costs, better uptime and well performance, improved base production decline management, and reduced workover activity).
Keywords: LOE, reduced costs, cost savings

Theme: Future E&P revenues are a function of the selling prices of the various hydrocarbon streams based on market pricing (before taking the effect of hedging into account). Everyone from analysts to banks to E&Ps has different assumptions about where commodity prices will be in the future, so it is important to understand what prices management is assuming in any forward outlook.
Keywords: Price deck 2017, strip pricing

Theme: Looking at the unhedged outspend for 2017 on strip pricing can give an estimate of the sustainability of the investment program in a lower-for-longer environment. Operators that are far outspending unhedged cash flow could be more inclined to revisit capital programs and investment levels. Companies with substantial hedge books to lean on are in a (relative) position of strength, allowing for a more aggressive approach to operating and planning in the current lower price environment.
Keywords: Hedge book

Theme: The duration of the current lower oil price environment continues to be the million-dollar question, not how low oil prices will fall or whether they will recover to a level required to grow supply again.
Keywords: non-OPEC supply growth, OPEC production, demand growth, supply growth, rebalance market

Theme: Many believe that the longer this lower-for-longer oil price environment lasts, the more pressure there will be for companies to consolidate.
Keywords: M&A, consolidate, bid ask spread, acquisition, divestiture, non-core

Theme: Big gas oversupply situation
Keywords: Marcellus curtailment, Utica curtailment, natural gas storage, winter weather, Utica potential

Theme: Natural gas and NGL beats are a positive data point to many analysts, possibly suggesting that gas-to-oil ratios (GORs) are moving higher, which could be a positive sign for the oil macro.
Keywords: GOR, gas oil ratio, gas beat, NGL beat
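Conceptually, each of these theme searches is a keyword screen over an un-tickered document universe. A rough sketch of the idea follows; the documents and keyword lists are invented for illustration, and Sentieo’s actual document search supports far richer query syntax than simple substring matching:

```python
# A rough sketch of an open-ended (no ticker) keyword screen.
# Documents and keyword lists below are hypothetical examples.
themes = {
    "capital efficiency": ["further efficiency gains", "more with less", "operational gains"],
    "hedging": ["hedge book", "strip pricing"],
    "consolidation": ["bid ask spread", "non-core", "divestiture"],
}

def screen(documents, themes):
    """Return {theme: [doc ids whose text mentions any of the theme's keywords]}."""
    hits = {theme: [] for theme in themes}
    for doc_id, text in documents.items():
        lowered = text.lower()
        for theme, keywords in themes.items():
            if any(kw in lowered for kw in keywords):
                hits[theme].append(doc_id)
    return hits

documents = {
    "CLR-Q3-call": "We expect further efficiency gains and our hedge book remains strong.",
    "APC-Q3-call": "Divestiture of non-core assets remains a priority.",
}
print(screen(documents, themes))
```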

Break-even costs, well costs, and economics

The cost of producing a barrel of oil varies greatly across the country, creating a bifurcation between economic and uneconomic basins. Besides production and well performance, the key inputs that drive a producer’s expected rate of return, and the decision on whether drilling a well makes sense, include drilling costs, completion costs, transportation costs, and royalty fees.

In this example, Continental Resources ($CLR) uses well break-even data provided by Evercore ISI. CLR’s top-tier plays break even and generate an after-tax IRR at oil prices as low as $35.

 

Anadarko Petroleum ($APC), an independent explorer, recently commented that break-even oil prices for a U.S. onshore project are around recent oil prices (~$45):

You can also use document search to collect relevant economic input data for your models, such as production information, well costs, and other relevant items. In this example, I ran a search for well economics or well costs:
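To see why break-evens move around so much with these inputs, here is a toy after-tax IRR model of a single well. Every input (capex, type curve, royalty, LOE, tax rate) is a made-up placeholder, not CLR’s or Evercore ISI’s numbers; with these particular inputs the well happens to break even in the low-to-mid $40s, but the point is the mechanics, not the level:

```python
def irr(cash_flows, lo=-0.99, hi=10.0):
    """Bisection solve for the rate where NPV = 0 (conventional cash flows)."""
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def well_irr(oil_price, capex_mm=3.0, royalty=0.20, loe_per_bbl=12.0,
             tax_rate=0.35, months=36):
    """Monthly after-tax IRR of one well; hypothetical 7%/month decline curve."""
    production = [15000 * (0.93 ** m) for m in range(months)]  # bbl/month
    cash_flows = [-capex_mm * 1e6]
    for bbl in production:
        pretax = bbl * (oil_price * (1 - royalty) - loe_per_bbl)
        cash_flows.append(pretax * (1 - tax_rate))
    return irr(cash_flows)

# Higher price deck, higher IRR; negative IRR below the break-even price
print(well_irr(55) > 0 > well_irr(35))
```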


What is a company saying about its drilled but uncompleted well (DUC) backlog?

Let’s take a look at Continental Resources ($CLR) and see what they have said about their drilled but uncompleted backlog. A very simplified explanation of how oil and gas is pumped out of the ground comes down to two steps:

1) Drill a hole into the Earth with a drilling rig

2) Blast water and sand into said hole (fracking) to create pore space through which hydrocarbons can flow at a faster rate

The steep fall in oil prices has caused many cash-strapped operators to postpone step 2, because they have little incentive to spend heavily on completing wells and bringing new production online into a low commodity price environment. As a result, we have a buildup of these uncompleted wells, known as DUCs (drilled but uncompleted wells). A rebound in oil prices could mean a large number of DUCs are all brought online at once, a potential uptick in supply that could create a ceiling on oil prices.
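The supply-ceiling argument can be made concrete with back-of-the-envelope math. All inputs below (backlog size, initial rate, decline, completion window) are hypothetical:

```python
# Hypothetical inputs: estimate the production rate added if a DUC backlog is
# completed evenly over a window, with each new well declining month to month.
def exit_rate(duc_count, ip_rate_bopd, months, monthly_decline=0.10):
    """Gross production rate (bopd) at the end of the completion window."""
    per_month = duc_count / months
    return sum(per_month * ip_rate_bopd * (1 - monthly_decline) ** age
               for age in range(months))

# e.g. 1,200 hypothetical DUCs, 500 bopd initial rate, completed over a year:
print(round(exit_rate(1200, 500, 12)))
```

Even after a year of decline, a backlog of this (made-up) size would still be flowing several hundred thousand barrels per day, which is why the market watches DUC counts closely.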

Below, I have run a search for the words drilled but uncompleted or DUC. You can exclude the ticker if you want to see what all players in the industry are saying about these DUCs, but in this case I have limited the search to CLR. The first five documents contained relevant information, so I highlighted excerpts from each document and tagged these passages with a DUCs label.

All of these highlights are then sent to your Sentieo Notebook, where they are automatically tagged and labeled appropriately. Here, I have applied a filter so that I can look at all of my CLR notes that reference DUCs.

Hopefully some of these tips help speed up your existing day-to-day research process, or have given you ideas for new ways to use Sentieo in your E&P research. If you would like to speak with any of our Product Managers about anything in this post, please send an e-mail to success@sentieo.com.

To see how Sentieo can help with your earnings prep, simply go to Sentieo.com and sign up for a free trial. If you would like to continually receive content related to topics of interest in the markets, don’t forget to subscribe to the Sentieo Blog so that we can notify you of new posts by e-mail. 


The post Oil & Gas Research the Smart Way with Sentieo $NFX $FANG $CLR $APC appeared first on Sentieo.

$PLATED – Using Mosaic to ballpark unlisted company financials
https://sentieo.com/plated-using-mosaic-to-ballpark-unlisted-company-financials/
Wed, 11 May 2016

This is a companion post to walk through our methodology for our post on WFM.

For full transparency, we wanted to go through in detail our math on the subscription meal-kit industry, where all players are unlisted. This is where Mosaic starts to come in handy to estimate real numbers.

Guesstimating Blue Apron as the anchor

The latest delivery number for Blue Apron is 8 million meals/month (more than double the June 2015 rate of 3 million meals/month). That Fortune article equates this to a $960m run rate on an ASP of $10, but we are skeptical: customers rarely pay full list price, given the myriad referral bonuses and other promotions common in the business. We reckon the real number is closer to $9 per meal, which puts Blue Apron on a still-impressive $720m revenue run rate.

HelloFresh as the other end of the vector

HelloFresh’s 2015 pre-IPO numbers were $290m globally, of which roughly 60% was US revenue. That’s about $150m for calendar 2015, a year over which the company quadrupled, so it exited with closer to a $375m run rate in US revenues.

Using Mosaic and two known points to triangulate

We don’t know much about Plated and the other smaller players in the subscription kit space, but fortunately, they all run similar business models in the most transparent traffic market in the world. We used Mosaic to pull together three independent reads on Plated’s traffic and got extremely close results:

Using the market share data and the known revenue numbers, we can estimate Plated at about $135m in annual run rate.


This is in the ballpark, considering Plated was pinned at $100m in this Inc article from June 2015. The closeness of fit of a linear curve shows that revenues are strongly tied to traffic acquisition, but also that there is no clear barrier to entry or top-line benefit to scale.
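The triangulation reduces to fitting revenue as a linear function of traffic share through the origin and reading Plated off the line. The traffic-share figures below are illustrative placeholders rather than our actual Mosaic reads; they are chosen so the two anchors sit on the same line:

```python
# (traffic share %, $mm US revenue run rate) for the two anchors from above;
# the share numbers are hypothetical stand-ins for the Mosaic reads.
known = {"Blue Apron": (48.0, 720.0), "HelloFresh": (25.0, 375.0)}
plated_share = 9.0  # hypothetical traffic share %

# Least-squares slope for a line forced through the origin: k = sum(xy) / sum(x^2)
sum_xy = sum(x * y for x, y in known.values())
sum_xx = sum(x * x for x, _ in known.values())
k = sum_xy / sum_xx

plated_estimate = k * plated_share
print(round(plated_estimate))  # ~$135m annual run rate
```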


Getting a Handle on Industry Growth

So we now know that the industry is doing roughly $1.4bn/yr today, but that number is meaningless in isolation because the rate of growth presents a constantly moving target, something traditional investors in retail and staples aren’t used to. We need to get a handle on growth as well. Wouldn’t it be nice to have a platform where you could easily pull up that data from multiple sources?


Plated’s March peak in traffic is a one-off bump thanks to their deal with Mark Cuban on Shark Tank, but we reckon the industry has roughly doubled year-on-year, which means there was at least $700m in incremental revenue over 2015, back-end weighted. What is a reasonable estimate for forward projections?

Forecasting the future is more art than science. If the industry saw 100% growth in 2015, is it fair to say it will grow 100% again in 2016? On the one hand, you are starting from a higher base and moving from early adopters to followers. On the other hand, you have better funding, cash flow, and scale in everything from customer base to marketing campaigns to logistics. It is hard to take the over/under. We think a fair conservative estimate is $800m in incremental revenues, which is a slight acceleration year-on-year in dollar terms but a sizable deceleration in percentage terms, to roughly +60% forward growth. We think the risk to this number is to the upside as more funding and entrants like Amazon come into the space.
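The arithmetic behind that forward growth figure, spelled out with the estimates from this post (the exact quotient is ~57%, which we round to roughly +60%):

```python
# Figures are the estimates from this post, in $mm.
industry_run_rate = 1400   # industry revenue run rate today, $mm/yr
incremental_2015 = 700     # added over 2015 (~100% y/y growth)
incremental_2016e = 800    # our conservative forward estimate

forward_growth = incremental_2016e / industry_run_rate
print(f"{forward_growth:.0%}")
```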

Calculating the Comp Sensitivity for WFM

1% of WFM’s 2015 revenues of $15bn is $150m, and given that WFM is not closing stores at any meaningful scale, this maps directly onto comparable store sales: every $150m taken away from WFM is 1% off comps.

However, WFM does not bear the brunt of the disruption alone. A number of WFM’s peers, from Kroger to Sprouts Farmers Market, have called out or commented on the potential impact of subscription meal-kit services. While WFM is a major player, it would be unfair to attribute the full amount of the disruption to it. Since WFM is approximately 25% of grocery industry revenues, we think somewhere in that ballpark would be appropriate, though pricing and demographic characteristics make WFM more susceptible to disruption than the general industry.

Taking all of the above into consideration, we can stress test a simple model of WFM comp sensitivity to subscription meal-kit services, which is how we arrive at the Comp Sensitivity table used in the main WFM blog post.
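The comp-sensitivity math itself is one line; here is a sketch of how a row of that table is produced, with the attribution share being the ~25% judgment call discussed above:

```python
# Comp sensitivity sketch using the figures from this post.
WFM_REVENUE_MM = 15000                      # $mm, FY2015 revenues
MM_PER_COMP_POINT = WFM_REVENUE_MM * 0.01   # $150m of revenue = 1 comp point

def wfm_comp_impact(industry_incremental_mm, wfm_attribution=0.25):
    """Comp headwind in percentage points if WFM absorbs `wfm_attribution`
    of the meal-kit industry's incremental revenue."""
    return industry_incremental_mm * wfm_attribution / MM_PER_COMP_POINT

# e.g. $800m of incremental meal-kit revenue, 25% attributed to WFM:
print(round(wfm_comp_impact(800), 2))  # ~1.33 points of comp headwind
```

Varying `industry_incremental_mm` and `wfm_attribution` over a grid is exactly how a sensitivity table like the one in the main post is populated.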


 

