SteadyOptions is an options trading forum where you can find solutions from top options traders.


Leaderboard


Popular Content

Showing content with the highest reputation on 08/21/2013 in all areas

  1. 1 point
    I thought some of the smart people on here might find this interesting: http://www.datatime.eu/public/gbot/

    It's a public project to test algorithms and fully automated trading strategies with a trading platform (robot), and there's a client for (only) Interactive Brokers TWS or Gateway. It's really aimed at fund managers and large investors, but we can download it free for evaluation purposes with a paper trading account, as they're looking for people to test the platform and concepts. It's primarily for ETFs and futures, but options seem to be fully integrated as well.

    Anyway, there's built-in backtesting, forward testing, and some really unique features: "extracting profit from the price curve by player superposition, and overlaying multiple 'dynamic probabilistic cage' or 'order clouds' from which the price curve cannot easily 'escape' without being scalped. This is implemented through an overlay of multiple strategic layers, each one working with a multi-agent logic and hedging each other (player superposition)."

    Obviously not for the novice or casual investor, but maybe some of you guys here would like to take it for a test drive.
  2. 1 point
    Samer - let me first say thank you for sharing your analysis. I have learned a lot by studying it.

    You have taken the ratio (IV - HV) / IM (I will refer to this as SQ - the Samer Quotient) and regressed this measure against median returns for 4 categories of trades ranging from losers to winners. The R-squared was 98% (pretty amazing). The SQ sure seems to be telling us something important.

    This was certainly exciting to see, but something strange happened when trying to translate these categorical medians into concrete trading rules. It turns out the median of the "worst" category (median SQ 2.08) seems to be close to the best trading rule (average 3.5% return, vs. 3.8% at the optimal threshold of 2.06), while the median of the best category (median SQ 1.42) seems to be a relatively poor trading rule (average 2.4% return). By trading rule I mean "if the SQ is less than X, then do the trade; otherwise skip the trade." Of course I understand that average return is not the end of the story (you have to take capital availability into account), but it is informative nonetheless.

    See the attached spreadsheet for details. The "trading rule" sheet builds on your analysis. I used Excel "data tables" to test the effects of numerous trading rule thresholds. You can see that the numbers jump around a lot (suggesting a sample size too small to infer from) until you get to an SQ of 2.0-2.5. 2.06 is the optimal threshold (from an AVERAGE return perspective) for this sample. If you look at the numbers further (see the "profitable" column), the trades seem profitable enough to be worthwhile all the way up to SQs of 3.5-4.0.

    Long story short, this is what IMHO we can infer from this analysis of this sample:
    - We should be leery of trades with SQs in excess of 3.5-4.0
    - The SQ is clearly a powerful metric that warrants further study as our sample dataset grows

    Thanks again Samer for sharing your knowledge with us! This is just my opinion... looking forward to hearing what others think.
    p.s. Kim - is there a way to configure the site so we can upload *.xlsx files? Currently we have to zip the file before the site will let us upload it. Gary

    SO Samer Analysis.zip
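The threshold rule described above ("if the SQ is less than X, then do the trade, otherwise skip the trade") can be sketched in a few lines of Python. The trade data below is entirely made up for illustration; only the shape of the rule comes from the post:

```python
# Hypothetical (sq, return_pct) pairs -- NOT real trade data.
trades = [(1.4, 4.1), (1.9, 3.2), (2.3, 2.5), (3.0, 1.0), (4.2, -2.0)]

def avg_return_below(trades, threshold):
    """Average return of the trades taken under the rule:
    enter only when SQ < threshold, skip the rest."""
    taken = [ret for sq, ret in trades if sq < threshold]
    return sum(taken) / len(taken) if taken else None

# Sweep a few candidate thresholds, as the Excel data table does.
for x in (1.5, 2.0, 2.5, 3.5, 5.0):
    print(x, avg_return_below(trades, x))
```

Sweeping the threshold like this reproduces the spirit of the spreadsheet's data-table experiment: a low threshold takes few (but strong) trades, a high one dilutes the average with weak trades.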
  3. 1 point
    Awesome post Mikael, thanks for sharing. I want to read all of Augen's books and it drives me crazy not having the time...

    PaulCao, I'll take a stab at the math question: it looks like what this is doing is converting a population stdev to a sample stdev. When you calculate the stdev of a population, you take the sum of the squared differences from the mean, divide by "length", and take the square root. However, if you are inferring the population stdev from a sample, you have to divide by "length - 1" to get an "unbiased estimator" of the population stdev. The reason for this is to adjust for a fairly esoteric mathematical concept known as a "degree of freedom", which is used up when the sample mean is used as an estimator for the population mean before computing the sample stdev.

    I googled ToS's stdev function and it does appear to be using the population version, so this adjustment appears to be correct. Here's a link to the formula for the ToS version: http://demo.thinkorswim.com/manual/dark/thinkscript/reference/Functions/Statistical/StDev.html. Note that in Excel, the "default" STDEV function is the sample version, and you have to use STDEVP to get the population version.
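The conversion described above amounts to multiplying the population stdev by sqrt(n / (n - 1)). A minimal Python sketch using the standard library (the data values are arbitrary, chosen only for illustration):

```python
import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # arbitrary sample
n = len(data)

pop_sd = statistics.pstdev(data)   # population stdev: divides by n
samp_sd = statistics.stdev(data)   # sample stdev: divides by n - 1

# Converting the population figure into the unbiased sample figure:
converted = pop_sd * (n / (n - 1)) ** 0.5

print(pop_sd, samp_sd, converted)  # converted matches samp_sd
```

This mirrors the indicator's adjustment: if a platform's built-in function divides by n (as ToS's StDev appears to), scaling by sqrt(n / (n - 1)) recovers the n - 1 version.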