**HOWTO: The Reasoning and Learning Lab**

The following topics were discussed over a series of informal discussions held as part of the regular meetings of the Reasoning and Learning Lab. The hope is that in reading this guide, you will have a better sense of the lessons learned from experienced researchers. While much of the advice comes in the form of best practices, you will also see hints of what matters at the level of culture.

[toc]

# On Research

## Be curious

Throughout your research, it's important that you focus on work which is important to you. You may not know what is important at the outset, so don't become cliquey. That is to say, you should be willing to share your ideas and be open. At the outset, it's important not to block yourself from new ideas, and to give yourself the time to explore your interests. The ideas which are important to you may not be important to others. Paul Graham discusses this in [the Bus Ticket Theory of Genius](http://paulgraham.com/genius.html). In particular, Graham states:

> If I had to put the recipe for genius into one sentence, that might be it: to have a disinterested obsession with something that matters.

Richard Hamming [famously shared with his students](https://www.cs.virginia.edu/~robins/YouAndYourResearch.html):

> What are the important problems of your field?

and

> If what you are doing is not important, and if you don't think it is going to lead to something important, why are you working on it?

When moving from topic, to questions, to answers, you will naturally incorporate assumptions. These assumptions might seem trivial or silly; by writing them down, you might open up new areas of inquiry. As your ideas begin to form, you will have questions.

## Be precise

As you learn about an area, you should practice writing down your questions in a form that is answerable. Ideally, as you learn, you will be able to produce questions that are answerable empirically, theoretically, or both.
When you design experiments, it's important to have clear _quantitative_ statements which follow directly from your initial conjecture.

Research is hard work. Since you are part of the lab, _you are already good at what you do_, and you are deeply curious. Nuances take time to appreciate. One habit that helps is reading a lot of papers (1-4 papers every day). One researcher said that their "read abstracts" to "read full paper" ratio was **20 to 1**. Reading papers will also help you connect your own ideas. As you read, you will have new questions. Write them down and try to connect them to larger questions.

## Draw a map

As you learn more and more about an area, it helps to visualize what you are learning. Try zooming into the details and then connecting them back out to the big picture. If you get stuck, try alternating between your hypothesis, theory and empirical framework. One member of the lab described their methodology like so:

![](https://i.imgur.com/FsNMF8D.png)

They characterized the research activity as zooming in and out.

![](https://i.imgur.com/EIGyX8e.png)

## But is it research?

Once you start to familiarize yourself with an area, you may find yourself imagining what the results will look like, or how the plots might read. Ask yourself: if you already know what the results will be, is it really research?

## Your network

You are part of a support network:

- your friends
- other colleagues in the lab
- your supervisor(s)
- internal conference reviewers

Research is a collaborative endeavor; exchanging ideas with others and testing your ideas on them is a valuable practice. When you begin studying a topic, things will be easier if your interests and your supervisor's overlap considerably. As time goes on, you will naturally find your own way. As one senior researcher said:

> It's a good sign if you start disagreeing with your supervisor by the end of your PhD.
## More reading

- [The Bus Ticket Theory of Genius](http://paulgraham.com/genius.html)
- [You and Your Research](https://www.cs.virginia.edu/~robins/YouAndYourResearch.html), or on [YouTube](https://www.youtube.com/watch?v=a1zDuOPkMSw)
- [Research methodology slides](https://docs.google.com/presentation/d/1mpExpJVWfGrfQvQ6xdvH8AGL44AU3ruPdEanKk8FvYM/edit#slide=id.ga2f816f8dc_0_11)

# On Writing

Sharing your research is a big part of the work. The best work is written and then re-written many times. Therefore, start the drafting process _at least_ a month before a conference deadline.

## The process of writing

Some people benefit from time tracking systems (like [the Pomodoro Technique](https://francescocirillo.com/pages/pomodoro-technique)). A member of the lab noted that the average experimental paper took ~50 hours of writing and that theory papers took ~60 hours. Their thesis took 120 hours. A large grant averaged 9 hours of writing. These estimates were *after* the results were realized.

Writing an outline is an important first step. Once you have the big idea, break it down into sections and paragraphs. Each paragraph can begin with a list of bullet points explaining the paragraph. Experimental results and figures are usually easier to write, but should not be overlooked since they are the first things people will look at. Related work is often written at the end and will naturally refine parts of the introduction. Spend at least 50% of your total drafting time editing your work.

## Preparing theoretical contributions

First, make sure what you are saying is true. While errors do happen, they should be avoided at all costs. When editing, errors can often crop up in lemmas which are buried in the core result. Check everything. Remember that what you are trying to do is share your results with a broader community. Be clear and concise, and provide all the information and background a reader might need. Be rigorous.
You cannot be clear or communicate truth if your results are not rigorously presented. Don't rely too much on intuition; let the math speak for itself. One member of the lab shared that they will have hundreds of pages of notes to review and work through before typing up a result in LaTeX. Instead of focusing on a specific deadline, focus on the idea and build it up first. Don't assume the reader knows what you are talking about. Don't assume they have read other work in your subfield.

## Avoid notation and acronym soup

Unexplained notation is unacceptable. One way to avoid these issues is to practice giving talks to small groups of friends without notes. This is a mental exercise that helps you become independent of all the heavy machinery in formulas and rely more on your own memory and understanding. Clarity lives in the notation: keep it simple. You should feel pain whenever you add a subscript, and avoid all but the most common acronyms where possible.

## Writing for Conferences

Don't lose the interest of the audience with long digressions--set the scene and describe the problem you are addressing. Then be upfront about the contributions. Related work can help position your results, and it's okay to hint a bit at the beginning and then go into more detail later on.

High quality scientific communication _takes time_. Generally speaking, the machine learning research community has a lot to learn from other scientific fields. You can be part of this shift to higher quality written work by spending more time writing up your ideas. How do we get better at writing? **Practice**, and **read** good writing. Think about conference writing as a "style"; blog posts are another style. A good analogy is playing music: like a good piece of music, a paper needs to be reworked many times, and like any genre, on the surface it can be extremely formulaic. The introduction can be structured in the same way, each template has the same information, and so on.
Try to adhere to a formula and treat it as a kind of practice in a style. [The following template is a good starting point](https://cs.stanford.edu/people/widom/paper-writing.html#intro). Well written papers tend to be received better at conferences (more on this in *On Reviewing*).

## Write in groups

In the humanities, writing groups where people workshop research are quite common. Consider finding peers who are willing to share sections of a paper with one another solely with the intention of improving the quality of the writing. In such a setting, everyone is reading each other's work. The beauty behind this approach is that you can see your work, as well as others', evolve over time.

## Read great writers

The following sources were highlighted during our discussions:

- [On Paper Writing](https://cs.stanford.edu/people/widom/paper-writing.html#intro)
- Mathematicians:
    - Jean-Pierre Serre
    - Terence Tao
- Researchers with great writing:
    - Jon Kleinberg
    - Adam Grant
    - Walter Bison
- Books:
    - Bird by Bird
    - The Sense of Style (Pinker)
    - On Writing: A Memoir of the Craft
- Computer science books:
    - Convex Optimization by Boyd and Vandenberghe
    - Neural Network Learning by Anthony and Bartlett
    - Algorithms for Reinforcement Learning by Szepesvári (a bit terse)
    - Elements of Information Theory by Cover and Thomas

A well written paper should be sweet to read. A reviewer faced with a mountain of mediocre papers who enjoys your paper will likely be more inclined to accept it.

# On Experiments and Code

A big part of research involves replicating existing experimental results, running experimental baselines and defining new experiments. Fortunately, there's a growing movement in the computer science community to publish code with research results, thereby improving reproducibility. Still, over time you will develop your own workflow for developing and conducting experiments.
## Your Workflow

For any published work, it's not uncommon to run thousands of experiments. Learn to use version control (like [git](https://git-scm.com/)) and keep your experiments and the configuration of (hyper)parameters together. You should also become familiar with a plotting library and scientific computing libraries. For example, in Python this includes [NumPy](https://numpy.org/), [PyTorch](https://www.pytorch.org), [TensorFlow](https://www.tensorflow.org/), [JAX](https://jax.readthedocs.io/en/latest/) and [SciPy](https://www.scipy.org/). Plotting results (e.g. using [Matplotlib](https://matplotlib.org/stable/index.html)) or tabulating experiment runs (e.g. with [Pandas](https://pandas.pydata.org/)) will enable a tight feedback loop between defining an experiment and seeing the results. As one member of the lab put it: "there is no such thing as plotting too much"!

Some other tips:

- Track all your experiment parameters
- Use experiment tracking systems
- Use version control
- Stick with free tools since they will always be available
- Learn a configuration management solution (like [Hydra](https://hydra.cc/) with [joblib](https://joblib.readthedocs.io/en/latest/)) for running many experiments simultaneously
- Learn how to use [SLURM](https://slurm.schedmd.com/documentation.html) for launching experiments (on the Mila Cluster or [Compute Canada](https://www.computecanada.ca/))

## Your Tools

Pick a set of well supported tools and learn to use them really well. For machine learning and deep learning in particular, this usually means becoming familiar with Python, NumPy, PyPlot and PyTorch or TensorFlow. If you are coming from a software development background, consider working in data science environments that incorporate a REPL (Read-Evaluate-Print Loop), where small code snippets can be evaluated quickly. These tools can replace a traditional development environment if the experiments are well contained.
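The first tip above--track all your experiment parameters--can be followed even before adopting a full tracking service. Here is a minimal, standard-library-only sketch; the function name, directory layout and metadata fields are illustrative assumptions, not a lab convention:

```python
import json
import pathlib
import subprocess
import time


def save_run_metadata(out_dir, params):
    """Write the (hyper)parameters and the current git commit next to the results."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    try:
        # Record exactly which version of the code produced this run.
        commit = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True, stderr=subprocess.DEVNULL
        ).strip()
    except (OSError, subprocess.CalledProcessError):
        commit = "unknown"  # e.g. git missing, or not inside a repository
    metadata = {"params": params, "git_commit": commit, "timestamp": time.time()}
    (out / "metadata.json").write_text(json.dumps(metadata, indent=2))
    return metadata


# Example: record the configuration of a single run before training starts.
save_run_metadata("runs/lr_sweep_0", {"lr": 3e-4, "seed": 0, "batch_size": 64})
```

Reading back `metadata.json` months later tells you which code and configuration produced a given plot, which is exactly the reproducibility that ad-hoc file naming tends to lose.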
Notebook platforms can provide some efficiencies since they allow you to capture your notes, LaTeX, plots and code in one place. However, this comes with a trade-off: these platforms tend to version code poorly, become difficult to maintain over time and don't scale beyond a few hundred lines of code. [Jupyter](https://jupyter.org/) (available on the [Mila Cluster](https://mila.docs.server.mila.quebec/cluster/mila-cluster/index.html) or the [RL Lab servers](https://www.cs.mcgill.ca/)) is preferred. If you are just starting out, pick *stable libraries* so that you can focus on writing your experiments instead of debugging somebody else's code.

### Jupyter, Google Colab and Kaggle Compared

Three environments are commonly used by the lab: Jupyter, Google Colab and Kaggle.

Being open source, JupyterLab can be installed anywhere and its notebooks are interoperable with vendor-offered solutions.

Google Colab is nice if you want to share code with collaborators or more broadly, since the documents can be shared without requiring any sort of authentication. Autosave is convenient, but explicitly tracking versions is difficult. Native access to Google Drive can also be a huge plus if you are already storing datasets or other assets in the cloud.

Kaggle provides a notebook solution with support for large datasets out of the box. Their solution incorporates git, so versioning your experiments becomes a natural part of the workflow. Kaggle notebooks also raise errors before you commit code, thereby ensuring that every saved version of the notebook will run correctly. Kaggle offers longer runtimes than Google Colab (up to 8 hours as of this writing) without paying for a pro license. They also provide free TPU access.

### Don't use a REPL

A handful of researchers rejected the use of any sort of notebook system, opting instead to build their own tooling using native Python objects and web-based UI components on top of [Streamlit](https://streamlit.io/).
This approach encourages you to focus on learning a handful of tools well and staying in a development workflow. Instead of relying on a REPL, these researchers emphasized the benefits of using the Python debugger (either via the command line or using an IDE like [PyCharm](https://www.jetbrains.com/pycharm/)).

## Automating deployment and reproducibility

Deploying experiments or hyperparameter sweeps, and reproducing month-old experiments, takes a lot of time. [buddy](https://github.com/ministry-of-silly-code/experiment_buddy/) integrates with PyCharm, GitHub, wandb and the Mila cluster to do that for you.

## Tracking Experiments

Picking a framework for logging is critical, since you can often have dozens or hundreds of versions of the same experiment running simultaneously. If you are working on one machine, TensorBoard can be a lightweight solution for monitoring, in real time, log files which track training performance (training loss, accuracy, running time, etc.). There are also a number of cloud-based providers; two which stand out are [Comet](http://comet.ml) and [Weights and Biases](http://wandb.ai/site). Some members of the lab prefer to store all their experiment data in a cloud platform and then rely on its API to render custom plots in a notebook.

# On Authorship

Mila and the RL Lab are very collaborative environments. Given our flat organizational structure, it is very easy for ideas to be shared and for collaborators to join a research effort.

## Collaborations raise questions of culture

How we work together depends on the working culture of the lab and our research community. All cultures have common properties:

- Culture changes over time
- Culture differs between fields and subfields
- Culture differs between labs and institutions

More generally, you should start by learning the culture that you wish to contribute to. Then you can make meaningful improvements through your own actions.
Remember that authorship should recognize people's contributions. If the delta on the amount of work someone did is minor, then an acknowledgement should be fine. Another way to think about this is to ask how the contribution would be different had they not been involved in the first place. An author on a paper should contribute something *material* to the paper (e.g. code, experiments, drafting and editing, proofs, etc.).

## Different fields have different rules

Machine Learning and Computer Science are relatively young academic fields. The approach of trying things and organically putting them together in a paper is very different from what might be done in other disciplines where experiments take months or years to run. For example, engineering and medicine treat authorship very differently, even when compared to machine learning! Deciding authorship should also be informed by where the research will be published; consider the audience who will be reading the work.

## Consider Maintaining a Contribution Statement

In some fields, people spend time crafting a contribution statement which indicates what everyone did and who made the biggest contributions to what. In our field this is not an established practice, but it's still helpful to think about the **minimum** and the **maximum** contribution of each collaborator. When the team is big, e.g. with interns, it becomes important to keep track of all the people you spoke to so that contributions are properly accounted for.

## Authorship and your thesis

Remember that a conference paper represents a unit of work which will be factored into someone's thesis. With many co-authors this gets messy, since it can become unclear who should incorporate the work into their dissertation.

## Be careful with co-first authors

As the standard for a conference paper continues to rise, more people are emphasizing "co-first authors" (equal primary contributors).
Be careful when you decide to do this, since it can sometimes be used as a way of avoiding difficult conversations. Remember that first authors typically have a veto on the paper, whereas other authors can elect to remove their names. The order of the last authors can change up until the submission deadline.

## Do's and Don'ts

- Do discuss who will be an author early on
- Do decide the specific order of authors towards the end of the submission process
- Don't submit until everyone has read the paper and acknowledged their authorship
- Don't do the minimum amount of work to get authorship
- Don't have a narrow view of authorship (e.g. consider the effort in data pre-processing or other pre-requisites to the experimental efforts)

## Senior authors

The rules for students and professors are different when it comes to authorship. Final authors are typically the advisors of the first author. When there are several professors, it's important to raise these questions early and let the senior authors decide. When going on an internship, your advisor may or may not be on everything you do; clarify these details ahead of time.

The order of senior authors does not have a universally defined convention. Generally, your supervisor or the most senior authors will be listed last on a paper. Don't worry too much about having additional authors on a paper--what matters most is the quality of the work itself. The supervisors of your co-authors are not automatically authors on a paper, though in practice they often will be; this can depend on the venue you are submitting to. *Always* verify that your collaborators have asked their own advisors and that expectations have been set ahead of time.

# On Reviewing

Part of being a researcher is peer review. This happens as a reviewer for a conference, but also when you participate in *paper swaps* before conference deadlines.

## Are you ready?

You should be able to write a short summary of a given paper and have a sense of its related work.
When reviewing for a conference, it's helpful to break down your review into the following parts:

- Summary
- Evaluation:
    - Overall comments
    - Strengths
    - Weaknesses
- Comments to the Program / Area chair

When you are providing comments, be specific. There is usually space to provide additional information (which the authors will not see) which can inform the chairs' decision. Remember that you are trying to make the task of deciding whether to accept the paper as straightforward as possible.

Also, remember that a review should serve the authors. Everyone gets their papers rejected, so it's helpful to include at least a few positive and negative _actionable_ comments which can help them improve their work (e.g. do not say "the writing is bad"; do say "I would write X as Y"). Even if you think the idea is bad and irredeemable, ask yourself what you would do as a co-author on the paper to improve it.

## Dos and Don'ts

- Read the paper multiple times
- Give yourself time between readings
- Start by skimming (title, abstract, key results and conclusion)
- Check the derivations
- Work through the paper in multiple sessions
- As you read, ask yourself:
    - What am I learning?
    - What should my peers learn from reading this?
- Don't intentionally de-blind the review process
- Don't bias against "simple" results
- Don't bias towards well-known authors
- Don't nitpick on language and grammar
- Don't focus on state-of-the-art results

Once you feel like you can write a summary, give yourself some time to reflect. Be patient and thoughtful so that your comments are as effective as possible.

## Further resources

- [What Can We Do to Improve Peer Review in NLP?](https://arxiv.org/abs/2010.03863)

# On Speaking

Think of speaking as a one-shot thing: you don't expect people to go back and listen to something again. Avoid jargon and notation. Once you lose someone's attention, it's hard to get it back.

## Speak to your audience

It's helpful to have different pitches for different people.
For example, one at the level of an undergraduate student, one for a professor working in your subfield, and one for your supervisor. The most important pitch should be to yourself. Each one should be 15-20 seconds long.

## Tell a Story

Make sure you are well prepared, but avoid using slides when you first start developing your presentation. Start with the main points you want to convey and tell a story around those points. While a narrative format is great for retaining and building information, avoid making the punchline a mystery. Early on you must share **the problem**, **the hypothesis** and **the results**. Once you have set the stage, the rest of the talk can be evidence in support of your results. As with any narrative, there should be ups and downs. Once you have the arc of your presentation, it becomes easy to develop slides.

Everyone in a talk should get something out of it. Use the onion model: it's ok to go high-level, then deep, and then back to the surface level again. You need to repeat things and use images. Avoid text. People are going to have different objectives when looking at your work, and most will want a high-level picture. Make sure you use an example to drive the point home. We tend to over-estimate what is known by our audience. In a more informal presentation, ask what your audience cares about and try to draw connections. One way to keep people engaged early on is by asking a question with an obvious answer at the start.

Just as one should avoid putting the paper on the poster, avoid putting your paper on your slides. Avoid things like:

- pseudo-code
- proofs
- long text
- etc.

## Poster Presentations

It's helpful to have a story on your poster. Avoid presenting proofs or complex algorithms unless they are part of what makes your work novel. Practice sharing your work with friends, or use lab office hours. Remember that making an impact is also about communication and answering questions from a wide audience.
Since people come in and out of poster presentations, it's helpful to have an *interruptible flow* where you can give someone a 30 second pitch and return to some of the more involved questions. People will come with relatively loose interest; you need to earn their attention. Do this by motivating your work early on.

## Job Talks

You will inevitably have to present your work in a more formal setting. If you see a friend or colleague giving a lot of presentations at prestigious institutions, there's a reason for it: they asked to! Don't be shy about asking people you know if you can share your work at their upcoming lab meeting.

## After the Talk

Remember to have fun! If you are having a good time, then your audience is more likely to enjoy themselves. Let your personality come through. You should consider the presentation a great success if you've managed to get 3 ideas across. Always make sure there's time for questions, since the discussion is often very interesting. Respond to questions briefly so that many people can get an answer; you can always go into further detail one on one after the talk. Finally, remember that a talk is successful when someone decides to go read your paper after the fact.

## Some great speakers

- Peter Norvig

# On Internships

Doing one or several internships during graduate studies is becoming increasingly common. As such, it's worth talking about the whole process, since it will probably factor into your graduate school experience.

## Why do you want to intern somewhere?

Internships are a great way to learn what you might want to do after graduate school. For example, the internship experience can help you decide which institutions would suit the work/life balance you feel would be best for you after graduate school. A great internship experience has a number of benefits; however, it can be hard to find maximal alignment between your research, your supervisor and the host organization.
Therefore, it's worth deciding when to look for an internship and with what attitude.

### Explore or Exploit?

Several researchers described doing internships early in a degree as a great way to "explore" different problems and ways of tackling research problems. When you are exploring, you don't mind if the internship results in published work, or if you're working on a problem adjacent to your PhD. This can still be time well spent, since you are making connections, learning about how research is done in industry and developing a broader perspective which can serve you later on in your research. It's ok to use an internship to broaden your own horizons! This can mean learning the tools, or even learning how different organizations do research.

In *exploit mode*, you are prioritizing alignment with your research goals over doing any sort of internship. In both cases, however, it's worthwhile focusing on a specific researcher that you wish to work with, rather than an organization, because your experience will be largely dictated by how well this collaboration unfolds.

## Before Applying for an Internship

Getting the interview can be a bit challenging. It's very important to try to get references from people who could be potential collaborators. Be direct! Reach out to a specific research scientist that you want to work with. Focus on working with people that are doing work you are interested in. Warm introductions through friends or colleagues can also make a huge difference. Some organizations have specific times when they accept interns, so it's helpful to be mindful of these deadlines.

If you are an international student, internships can be a bit more complicated to set up. If you want to do one, you can sometimes find ways to satisfy the employer and the visa requirements of your study permit. For example, some students have managed to find internships which are part time, with caps on the number of hours.
Go out of your way to meet people at conferences who you would like to work with. Invite them to your poster presentations or talks and don't be shy. Make sure you consult with your advisor before you start applying for internships. In some cases, you can even start a collaboration before the internship and see if you actually want to go through the legwork of applying to work with the host researcher. If you don't receive a response, don't be shy about reaching out with a follow-up!

## Preparing for the Interview

Congratulations! You've passed the first "gate" in the internship process and are going to meet with people at the host organization. While different organizations focus on different kinds of evaluation processes, it's important to be well prepared. Have an academic website with your interests, a bit about you, your work and your CV.

## The Interviews

Most members of the lab have gone through at least two of the three following interviews for industry research roles. Try to be positive, and keep in mind that your interviewer wants you to succeed. Remember that they are looking for someone who will be a great new collaborator on their team.

### Coding Interviews

These interviews tend to follow a software engineering interview format. The focus will be on logical flow, writing pseudo-code on a whiteboard / Google Doc, or using a web-based development environment. Practicing on platforms like [HackerRank](https://www.hackerrank.com/) and [LeetCode](https://leetcode.com/) is highly recommended. It's also worth having friends play the role of the interviewer in time-boxed programming interviews. This sort of environment, while artificial, is the predominant way that technical skill is evaluated, and therefore requires explicit practice. Focus on structuring the problem and thinking out loud. Be prepared to drive the development process from start to finish.
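To make the format concrete, here is the kind of short warm-up exercise you might practice under time pressure (the problem and function name are illustrative, not drawn from any particular company's interview); note how the comments narrate the reasoning an interviewer wants to hear out loud:

```python
def two_sum(nums, target):
    """Return indices (i, j), i < j, with nums[i] + nums[j] == target, else None."""
    seen = {}  # maps a value to the index where we first saw it
    for j, x in enumerate(nums):
        # If the complement of x was seen earlier, we have our pair.
        if target - x in seen:
            return seen[target - x], j
        seen[x] = j
    return None
```

Narrating the design choice matters as much as the code: a single pass with a hash map takes O(n) time and O(n) space, versus the O(n^2) brute force of checking every pair.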
Also, when whiteboarding an algorithm, focus on the logic and try not to get too flustered if you forget the exact syntax for a specific Python library.

### ML/Math Interviews

ML interviews tend to be more quiz-like. In this case, review material from any applied machine learning courses you've already done. Recruiters will usually give you a list of topics to study, which might include probability theory, calculus, statistics, machine learning, etc.

### Research Interviews

The research interview tends to be the most informal and centered around your particular research interests. Remember that your interviewer is trying to:

- evaluate the soundness of your vision,
- learn how you go about solving a problem,
- understand how you work and what makes you unique.

Prepare a 30 second pitch for all the papers you've been a part of. Make sure you are familiar with all the details of each paper, even the parts you were not directly involved in. For example, if you were primarily concerned with the experiments, be prepared to discuss the theory (or vice versa). You should also expect more theoretical, or open ended, questions.

## The Internship

Internships tend to go by very quickly, so it's helpful to acquaint yourself with any tools required for running large experiments (e.g. SLURM) ahead of time. Some internships do not result in publication, and that can be fine (e.g. being in explore mode). You should measure your own success in terms of the quality of the collaboration between you and your host researcher.

Once you have the internship, don't stop networking! Being in a new organization is a great opportunity to learn about what other people are working on and to make new connections for the future. While there should be no expectation that you will continue to work on the project after the internship, it often ends up being the case that there will be work left over if you wish to turn your work into a publication.
## Beyond the internship

A successful internship does not need to enter into your thesis. By having a great experience, you also open up new doors down the line that could never be expected. For example, one professor mentioned that mentors from an internship during graduate school resulted in fellowships and other meaningful connections many years later.

## More references

The following accounts were written by members of the lab in preparation for this topic:

- [Digging my Memory](https://docs.google.com/document/d/1iSQDSdkysMznicWaHq0dIyduzg3_1myC0ilImbWLpF0/edit)
- [Research Internships](https://docs.google.com/document/d/1ZKkOcq0JOW07SRf-vMJLjaKr7GtzAq9eltmXsKxRpLw/edit#heading=h.j4kgd8yzro0y)

# On Failing

_You will fail_, probably most of the time, but those failures will help you learn and become a better researcher. Your code will have bugs; this will make you a better coder. Your math will have errors; this will make you a better mathematician. Your ideas won't work out; this will make you better at discriminating good ideas from bad ones.

Failing constantly is _not a normal activity for the brain!_ Keep this in mind as you explore the unknown unknowns. Some people are very comfortable in that regime, some are not. Sometimes we have highs where we are motivated by iterating over and over; sometimes we get stuck in a rut and nothing seems right. Research is different for everyone, and how each of us relates to it is a deeply personal experience. Being mindful of our mental health and reaching out are good practices.

## Identifying dead ends

When should you stop working on something?

- Do stop when you feel like no progress has been made after a significant effort
- Do try to understand **why** you're stopping (e.g. try to come up with a toy problem where your algorithm fails); this might actually lead you to not stop and find something new
- Do give yourself a time limit beforehand (depending on the idea's maturity: 2 hours, 2 days, 2 weeks), both as a lower bound (e.g. 2 more days to understand the failure) and an upper bound (e.g. 2 more days and then I'll stop obsessing over this idea)
- Be wary of the [sunk cost fallacy](https://en.wikipedia.org/wiki/Sunk_cost#Fallacy_effect)
- Do reach out to others (in particular your supervisor(s)) if you've already invested time
- Do reach out to others (in particular your supervisor(s)) to learn if your "stopping" boundary is too low or too high
- Don't feel bad for moving on
- Don't give up too quickly, but do give up at some point