Closes #10670 - Update issue templates.

fennec/production
Chenxia Liu 4 years ago committed by liuche
parent a547946dfc
commit 99ae855835

@ -6,8 +6,10 @@ labels: "🌟 feature request"
assignees: ''
---
### Why/User Benefit/User Problem
### What/Requirements
### What is the user problem or growth opportunity you want to see solved?
### How do you know that this problem exists today? Why is this important?
### Who will benefit from it?
### Acceptance Criteria (how do I know when I'm done?)
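For context, each of these templates is driven by the short YAML front matter block that GitHub reads from files under `.github/ISSUE_TEMPLATE/`. A minimal sketch of the feature-request template's front matter follows; the `labels` value is taken from the hunk above, while the file path and the `name`/`about` strings are illustrative assumptions:

```yaml
---
# .github/ISSUE_TEMPLATE/feature_request.md (illustrative path)
name: "Feature request"         # shown in GitHub's "new issue" template chooser (assumed)
about: "Suggest a new feature"  # one-line description in the chooser (assumed)
title: ''                       # no default issue title
labels: "🌟 feature request"    # label applied automatically, per the hunk above
assignees: ''                   # no default assignee
---
```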

@ -1,81 +0,0 @@
---
name: "\U0001F469\U0001F52C A/B Experiment Request"
about: Template to run and define an A/B experiment
title: ''
labels: ''
assignees: ''
---
## Meta Data
(optional / if needed or relevant)
Links to past feature documents. Past issues/bugs that might provide additional context.
Links to dashboards or metrics on which the following document will be based.
## Problem Summary
Talk about current goals of the system and how they aren't being met. What outcomes are we not delivering? This can also include an explicit request for improvement that doesn't dictate a specific or concrete solution.
## What user problem(s) are we trying to resolve?
[List of prioritized user stories using the syntax “So that … as a … I want ….”]
### Assumptions (optional)
This is where you talk about what you assume to be true. This could include assumptions around what users are doing, errors, gaps, etc., based on anecdotes, opinions, testing, or data.
## Outcomes
What are the outcomes you want to achieve? What are the success criteria?
## Hypothesis
A high-level hypothesis of how the feature you're proposing is going to help us achieve the outcomes listed above. I recommend this be a sentence of the form:
If we (do this/build this/create this experiment) for (these users), then we will see (this outcome) because we have observed (this) via (data source, UR, survey).
## Metrics
- What is the primary success metric to confirm the hypothesis?
- Are there secondary success metrics which could potentially prevent this feature from rolling out?
- How will these metrics be measured? (tool, data source, visualization, etc.) Make sure these are validated before release.
Please provide sample artifact graphs here.
## Detailed design
This is the bulk of the RFC. Explain in enough detail to make it readable to someone outside of the team (other PMs, executives, etc.) or to someone joining the team.
An additional goal is to reduce any doubt in the interpretation of metrics we might collect. This should get into specifics and corner cases, and include examples of how the feature is used.
### Original Version (Present Day)
What is the current situation with regard to this feature? How does it currently work? There is no need to go into as much detail as for the suggested change; just enough to provide contrast and more context. Screenshots and user flow can often be enough.
[Add screenshots]
### Variation A
Provide details of the change. If this is one of multiple variations, explain why we think this change will be the better improvement. Include:
- Screenshots with appropriate explanation
- User flow
### Variation B (if necessary)
(Same details as variation A)
## Hypothetical Implementation Plan
Unresolved questions and risks (optional)
What parts of the design are still TBD?
## Results
- If we are developing a hypothesis and defining success metrics, we need to log them here.
- If metrics leave room for interpretation, define them (e.g. when they are tracked, how, etc.).
- Include screenshots of result graphs or data tables
- These results will likely help us develop new hypotheses.
## Conclusion
Was our hypothesis true or false and why?
Our hypothesis was true because we observed... [e.g. a 15% increase in account signup completions].
We should also address secondary metrics here:
We also observed during this test that… [e.g. we had an increase in single device signups]
## Next Steps
There is no point in having a conclusion if you don't have take-aways with next steps.
Are we releasing? Are we making changes?

@ -1,24 +0,0 @@
---
name: "\U0001F535 Epic (Meta Feature)"
about: Create an Epic (Meta Feature)
title: "[Meta]"
labels: ''
assignees: ''
---
### Why/User Benefit/User Problem
- Description of Feature
- Add relevant info/research related to this feature
- Immediate task: Convert to epic and move to appropriate milestone
### Acceptance Criteria (Added by PM. For EPM to track when a Meta feature is done)
- UX completed
- User stories completed
- Strings written and approved
- QA completed
- Localization done
### What / Requirements (Added by PM and Eng Manager)
- UX Designs (Immediate task: Assign UX issue to this epic)
- User stories (to be created by PM)
- List dependencies on other issues/teams etc.

@ -0,0 +1,24 @@
---
name: "\U000026CF Investigative Spike"
about: Create an investigation spike
title: "[Spike]"
---
## Title
Brief description of what needs to be investigated, including the user story for which the spike is needed.
## Description
Description of what is being investigated, including:
- Method of investigation (engineering research, prototype, etc.)
- Boundaries of investigation (time box to x hours, does not include UX, etc.)
## Deliverables
Description of deliverables, including:
- Documentation of investigation results (within the spike ticket, or linked to it), including:
  - Findings
  - Recommendations
- List of possible user stories to implement recommendations, including estimates
## Next Steps
Reach out to Product to go over the results of the investigation.

@ -7,28 +7,15 @@ assignees: ''
---
Owner: Product Manager
#### Description & Product Manager / Data Scientist User Story
- As a product owner, I want to know if people use feature X so I can...
#### Hypothesis
- We believe this feature is useful to users, and successful when
...
#### What questions will you answer with this data?
#### Why does Mozilla need to answer these questions? Are there benefits for users? Do we need this information to address product or business requirements?
#### What probes (suggested, if applicable)
-
### Dependencies (Added by PM and Eng)
### Acceptance Criteria (Added by PM)
- Event pings can be queried via re:dash
- We are sending telemetry events for the actions listed in the requirements
- We have documented the telemetry
- We have asked a data steward to [review](https://github.com/mozilla/data-review/blob/master/request.md) the telemetry
- NOT an AC: Data science to create a dashboard or further graphs (this will be a separate issue; this issue is only about hooking up the events described and ensuring they can be queried in re:dash)
## Description & Product Manager / Data Scientist User Story
## What questions will you answer with this data?
## Acceptance Criteria
- [ ] ENG files a [DS JIRA](https://jira.mozilla.com/projects/DO/issues/DO-228?filter=allopenissues) request outlining their methodology.
- [ ] DS signs off on instrumentation methodology addressing product questions.
- [ ] Event pings can be queried via re:dash
- [ ] Event pings can be queried via amplitude
- [ ] We are sending telemetry events for the actions listed in the requirements
- [ ] We have documented the telemetry (see the sketch below)
- [ ] We have asked a data steward to [review](https://github.com/mozilla/data-review/blob/master/request.md) the telemetry
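Since the checklist above asks that the telemetry be documented, here is a minimal sketch of what that documentation might look like, assuming the project describes its events in a Glean-style `metrics.yaml` (an assumption; the template itself does not name the tooling, and every identifier and URL below is hypothetical):

```yaml
# metrics.yaml -- hypothetical Glean-style event definition (all names illustrative)
$schema: moz://mozilla.org/schemas/glean/metrics/2-0-0

feature_x:
  opened:
    type: event
    description: |
      Recorded when the user opens feature X.
      (Hypothetical event, for illustration only.)
    bugs:
      - https://github.com/mozilla-mobile/fenix/issues/0000  # placeholder issue link
    data_reviews:
      - https://example.com/completed-data-review  # placeholder; filled in after data steward review
    notification_emails:
      - telemetry-owner@example.com  # placeholder contact
    expires: never  # or a date/version, per the project's telemetry policy
```

Once an event defined this way ships, its pings can be queried in re:dash (and in amplitude, where mirrored), which is what the query checkboxes above verify.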

@ -7,7 +7,7 @@ assignees: ''
---
## User Story
- As a user, I want … so I can do …
- As a user, I want … so I can do … (keep it problem-centric)
## Dependencies
- List dependencies on other issues/teams etc.
