blue-ghost-765
06/02/2022, 7:03 PM
full-island-88199
06/07/2022, 2:56 PM
1. From the first `experiment_viewed` to first `experiment_viewed` + `window_length`
2. From the last `experiment_viewed` to last `experiment_viewed` + `window_length`
3. From the first `experiment_viewed` to last `experiment_viewed` + `window_length`
4. From any `experiment_viewed` event to that `experiment_viewed` + `window_length` (multiple conversion windows)
In GB it is currently 1, but in many use cases we would prefer 4, or at least 3. Is this something you have considered, or have thoughts about?
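For concreteness, a minimal sketch of the four window definitions, assuming per-user lists of exposure and conversion timestamps (the helper name and signature are illustrative, not GrowthBook code):

```python
from datetime import datetime, timedelta

def converted_in_window(views, conversions, window, mode):
    """Illustrative only: `views` holds a user's sorted experiment_viewed
    timestamps, `conversions` their metric-event timestamps."""
    first, last = views[0], views[-1]
    if mode == 1:  # first view -> first view + window (current GB behavior)
        return any(first <= c <= first + window for c in conversions)
    if mode == 2:  # last view -> last view + window
        return any(last <= c <= last + window for c in conversions)
    if mode == 3:  # first view -> last view + window
        return any(first <= c <= last + window for c in conversions)
    if mode == 4:  # any view -> that view + window (multiple windows)
        return any(v <= c <= v + window for v in views for c in conversions)

views = [datetime(2022, 6, 1), datetime(2022, 6, 5)]
conversions = [datetime(2022, 6, 6)]
print([converted_in_window(views, conversions, timedelta(days=2), m)
       for m in (1, 2, 3, 4)])  # [False, True, True, True]
```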
prehistoric-summer-28496
06/08/2022, 1:14 PM
'{{ startDate }}' and '{{ endDate }}'? I am asking because, for now, in use cases where the conversion window should be infinite, the '{{ endDate }}' parameter gets filled with (the real end date taken from the experiment configuration + the conversion window), and this leads to results changing after an experiment is stopped. Thus:
1. Having a '{{ conversionWindow }}' parameter would allow metrics to be edited cleanly.
2. Also, it is not obvious that '{{ endDate }}' is actually the experiment end date + the conversion window; it only becomes clear after looking at the rendered queries.
prehistoric-summer-28496
06/09/2022, 7:45 AM
Since the '{{ endDate }}' parameter (1) does not actually correspond to the end date of the experiment in stopped experiments (or the current date in running experiments), as one could logically assume, and (2) instead corresponds to experiment_end_date + conversion window and current_date + 2 days in stopped and running experiments respectively, I would like to suggest introducing '{{ conversionWindow }}' and '{{ buffer }}' parameters, so these two can simply be added to '{{ endDate }}', which would then correspond to the real end date of the experiment (or the current date for running experiments).
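A rough sketch of the proposal: keep '{{ endDate }}' as the real end date and expose the window and buffer as separate parameters to be added explicitly. Python's string.Template stands in for GrowthBook's {{ ... }} placeholders; nothing here is an existing GrowthBook feature:

```python
from datetime import date, timedelta
from string import Template

# Stand-in for a SQL metric template; $-placeholders mimic {{ ... }} params.
query = Template(
    "SELECT user_id, timestamp FROM events "
    "WHERE timestamp BETWEEN '$startDate' AND '$endDate'"
)

experiment_end = date(2022, 6, 9)      # real end date of a stopped experiment
conversion_window = timedelta(days=3)  # the proposed {{ conversionWindow }}
buffer = timedelta(days=0)             # the proposed {{ buffer }}

print(query.substitute(
    startDate="2022-05-01",
    # endDate stays the real end date; window and buffer are added explicitly
    endDate=str(experiment_end + conversion_window + buffer),
))
```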
chilly-umbrella-90459
06/13/2022, 2:53 PM
chilly-umbrella-90459
06/13/2022, 2:57 PM
chilly-umbrella-90459
06/13/2022, 8:58 PM
chilly-umbrella-90459
06/14/2022, 9:09 PM
chilly-umbrella-90459
06/15/2022, 8:08 AM
polite-pillow-33171
06/16/2022, 6:55 PM
lively-belgium-36533
06/17/2022, 9:47 AM
When I set `forceVariation` and then use `useFeature`, I found that it returns the default value, not the forced variation value, so I checked the source code. At line 353, `res.inExperiment` is false. Is this a bug, or how can I get the forced variation value?
bulky-oxygen-41438
06/23/2022, 12:42 PM
`hidden` is applied to the space of the items in the nav, instead of the whole nav. The list in the picture should show more than 10 items, but it is only showing two.
fresh-football-47124
ambitious-activity-2609
06/28/2022, 3:12 PM
polite-pillow-33171
07/06/2022, 6:21 PM
adamant-exabyte-53836
07/07/2022, 12:50 PM
WORKDIR /usr/local/src/app
RUN apt-get update && apt-get install -y wget gnupg2 \
    && echo "deb https://deb.nodesource.com/node_14.x buster main" > /etc/apt/sources.list.d/nodesource.list \
    && wget -qO- https://deb.nodesource.com/gpgkey/nodesource.gpg.key | apt-key add - \
    && echo "deb https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list \
    && wget -qO- https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
    && apt-get update \
    && apt-get install -yqq nodejs=$(apt-cache show nodejs|grep Version|grep nodesource|cut -c 10-) yarn \
    && apt-get clean && rm -rf /var/lib/apt/lists/*
full-island-88199
07/07/2022, 1:10 PM
gray-crayon-34592
07/08/2022, 8:57 AM
gray-crayon-34592
07/08/2022, 9:01 AM
busy-horse-73824
07/08/2022, 9:10 AM
busy-horse-73824
07/08/2022, 9:52 AM
We have a `course_page_viewed` event, which has `course_key`, `chapter_key` and `page_id` fields. If we're testing whether a content change helps people progress further, we might want to see whether more people make it to the end of the course, or to the next chapter, or whatever.
Currently we have to make an individual metric for every single thing we might want to look at. It would be better if we could have a base `course_page_viewed` metric, maybe with some minimal initial filtering, and then optionally filter on extra fields.
Or, as a rough equivalent, defining a metric with specific parameters that have to be filled in when selecting it (not as powerful, but maybe more compatible with e.g. SQL-backed metrics).
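A sketch of the parameterized-metric idea: one base `course_page_viewed` metric plus optional field filters, rendered into concrete SQL. The table name and rendering mechanics are illustrative, not an existing GrowthBook feature:

```python
# Illustrative only: a base metric plus optional per-field filters.
BASE_METRIC = (
    "SELECT user_id, timestamp FROM events "
    "WHERE event = 'course_page_viewed'"
)

def render_metric(filters: dict) -> str:
    """Append one equality predicate per supplied field."""
    clauses = [f" AND {field} = '{value}'" for field, value in filters.items()]
    return BASE_METRIC + "".join(clauses)

# e.g. a "reached chapter 2 of course X" metric, without defining it by hand
print(render_metric({"course_key": "intro-101", "chapter_key": "ch-2"}))
```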
future-candle-10380
07/08/2022, 6:27 PM
In files `Screenshot - cloud` and `Screenshot - local` you can see the data is exactly the same. In fact it's taken from the same Mixpanel production data store, but the results are different: in cloud, Chance to Beat Control is 50% and there is no Percent Change graph; in the local version it's 36%, with a nice graph.
In files `Screenshot - query cloud` and `Screenshot - query local` you can see that the query result data is identical.
In files `Screenshot - metric cloud` and `Screenshot - metric local` you can see that the metric configuration is identical.
In files `Screenshot - experiment cloud` and `Screenshot - experiment local` you can see that the experiment configuration is identical.
When I export the transformed data that goes into the gbstats calculation, it's the same as in the query:
{
"var_id_map": {
"0": 0,
"1": 1
},
"var_names": [
"Small",
"Big"
],
"weights": [
0.5,
0.5
],
"type": "count",
"ignore_nulls": false,
"inverse": false,
"max_dimensions": 20,
"rows": [
{
"users": 16,
"count": 13,
"mean": 5.769230769230769,
"stddev": 7.148467787993196,
"dimension": "",
"variation": "0"
},
{
"users": 10,
"count": 8,
"mean": 5,
"stddev": 2.7386127875258306,
"dimension": "",
"variation": "1"
}
]
}
And when I run it through a Jupyter notebook I get exactly the same results (chanceToWin: 0.36) as in the local version (see `Screenshot - notebook`):
{
"unknownVariations": [],
"dimensions": [
{
"dimension": "",
"srm": 0.23931654122149193,
"variations": [
{
"cr": 4.6875,
"value": 75,
"users": 16,
"stats": {
"users": 16,
"count": 13,
"stddev": 6.803611336557586,
"mean": 4.6875
}
},
{
"cr": 4,
"value": 40,
"users": 10,
"expected": -0.1466666666666666,
"chanceToWin": 0.3600440835131542,
"uplift": {
"dist": "lognormal",
"mean": -0.15860503017663863,
"stddev": 0.4426092654556529
},
"ci": [
-0.6415991584084633,
1.031741260830155
],
"risk": [
0.4933414277382564,
1.1808414277382564
],
"stats": {
"users": 10,
"count": 8,
"stddev": 3.2058973436118907,
"mean": 4
}
}
]
}
]
}
I can't debug the cloud version, but my suspicion is that the calculation is for some reason different than in the local version, and I am trying to figure out why.
Local commit: 16303dfba972b805bc3fcc5e6e6514cbf21a2209
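For what it's worth, the local chanceToWin can be reproduced directly from the uplift distribution in the output above: with a lognormal uplift, the chance to beat control is the probability that the log relative lift is positive. A quick check using scipy, nothing GrowthBook-specific:

```python
from scipy.stats import norm

# uplift of variation 1 ("Big") from the notebook output above:
# the log relative lift is modeled as Normal(mean, stddev)
mean, stddev = -0.15860503017663863, 0.4426092654556529

# chance to beat control = P(lift > 0)
chance_to_win = 1 - norm.cdf(0, loc=mean, scale=stddev)
print(chance_to_win)  # ~0.3600, matching the local chanceToWin
```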
busy-horse-73824
07/11/2022, 12:29 PM
ancient-receptionist-63597
07/13/2022, 6:40 PM
helpful-hydrogen-62495
07/22/2022, 3:55 PM
eager-rainbow-87882
07/22/2022, 10:26 PM
`ISODate` on feature conditions. Long story: we are going to store some ISO date strings as part of our attributes, and we need to use force conditions based on date ranges, for example force-enabling a feature for users created before `ISODate('2021-01-01')`. If you already support specifying rules based on date ranges in some different way, please point me to the documentation about it. But the solution needs to be dynamic, not fixed in our code (so doing the check in our code to assign the user a specific boolean attribute is not an option for our use case).
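One property worth noting: ISO-8601 date strings sort lexicographically in the same order as the dates they represent, so a dynamic "created before X" rule can in principle reduce to a plain string comparison on the attribute. A small generic illustration, not the GrowthBook SDK:

```python
# ISO-8601 strings compare the same way as the dates they encode,
# so "created before 2021-01-01" reduces to a string comparison.
created_at = "2020-07-15T12:30:00Z"  # hypothetical user attribute value
cutoff = "2021-01-01"

print(created_at < cutoff)  # True: this user was created before the cutoff
```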
few-memory-16563
07/28/2022, 4:26 PM
busy-horse-73824
08/03/2022, 3:05 PM
busy-horse-73824
08/03/2022, 3:06 PM
busy-horse-73824
08/03/2022, 3:05 PM