
All schools should have uniforms.

There's been this whole debate on dress code and double-standards for girls, all of which I completely agree with. If someone is sexually attracted to some chick's shoulder they're a creep, and schools shouldn't be making it a girl's responsibility to control a guy's reaction to an outfit.
But honestly, as a high schooler, I think we should all be wearing uniforms. First of all, there is no pressure to dress a certain way, and the divide between rich and poor narrows (obviously the rich kid will still come in a sports car and the poor kid on a bike or something, but there are fewer visual indicators of wealth).
In addition, school is like work: it's supposed to be a professional learning environment, yet right now it's all about how cool you look, and wearing a Nike hoodie and stained sweatpants doesn't really convey much professionalism. Plus, choosing a nice outfit can take a lot of time, and a uniform can easily cut out 5-10 minutes of deciding what to wear.
We're seeing this now: we're all in lockdown, and I think a lot of us can relate to feeling a whole lot less motivated when all we're doing is sitting around in our pajamas.
Another way I see uniforms being helpful is cost effectiveness. Schools can cover costs for poorer students, and seriously, how many uniforms do you need to buy for a school year?
Let's say you have 5 shirts, 5 pants/skirts/shorts, 2 jackets, 2 pairs of shoes, and an extra $30 in expenses, just for good measure.
That would be:
5(30) + 5(35) + 2(60) + 2(70) + 30 = 615
$615 really isn't that much for something you wear every single day for a whole school year. Assuming there are 180 days in a school year, that's $3.42 per day for a whole outfit, which is significantly less cost per use than owning lots of shirts, pants, hoodies, shoes, etc., which you'd need to make complete outfits.
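For anyone who wants to play with the numbers, here's the same calculation as a quick Python sketch (all quantities and prices are just the assumed figures above, not real price data):

```python
# Hypothetical uniform budget: quantities and unit prices are the assumed
# figures from the post, not real price data.
items = {"shirts": (5, 30), "bottoms": (5, 35), "jackets": (2, 60), "shoes": (2, 70)}
misc = 30  # extra expenses, for good measure
total = sum(qty * price for qty, price in items.values()) + misc
school_days = 180
print(f"total = ${total}, per school day = ${total / school_days:.2f}")
```

Swap in your own prices to see how the per-day cost changes.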
Also, people could choose what they want from the school's uniform options: e.g. a gender non-binary student or just a dude who likes some air between his legs can wear a skirt. And people shouldn't be punished harshly if they break the dress code or wear something else; the school should just tell them not to do it and move on.
I get freedom of speech and stuff, but I think letting kids express themselves with other things (hairstyles, nail polish, jewelry, backpacks, coats) can actually be fun. I think if we reframe the whole uniform argument from "we want you all to fall in line exactly the way we want" to "this is just a way for us to reduce decision fatigue and help create a more level social playing field," people would be more on board.
Edit: This is unpopularopinion. The reason why the recommendations on this community are things 99% of the population agrees on is because everyone upvotes what they agree with. I get you disagree, that's why this is unpopular. Downvoting if you disagree or upvoting because you agree kinda ruins the purpose of the community.
Edit 2: I know that $615 is a lot for school uniforms; I'm saying it's not a lot if you're purchasing a lot of outfits (see my calculation). People could probably get away with fewer tops, skirts, etc. Also, as I said above, schools could cover the cost of uniforms for the kids who need that assistance. There's no way on earth I'd expect a family living paycheck to paycheck to suddenly pull out $615.
submitted by PriceApprehensive22 to unpopularopinion

[Table] Asteroid Day AMA – We’re engineers and scientists working on a mission that could, one day, help save humankind from asteroid extinction. Ask us anything!

Source
There are several people answering: Paolo Martino is PM, Marco Micheli is MM, Heli Greus is HG, Detlef Koschny is DVK, and Aidan Cowley is AC.
Questions Answers
Can we really detect any asteroids in space with accuracy and do we have any real means of destroying it? Yes, we can detect new asteroids when they are still in space. Every night dozens of new asteroids are found, including a few that can come close to the Earth.
Regarding the second part of the question, the goal would be to deflect them more than destroy them, and it is technologically possible. The Hera/DART mission currently being developed by ESA and NASA will demonstrate exactly this capability.
MM
I always wanted to ask: what is worse for life on Earth - to be hit by a single coalesced asteroid chunk, or to be hit by multiple smaller pieces of an exploded asteroid, aka the disrupted rubble pile scenario? DVK: This is difficult to answer. If the rubble is small (centimetres to metres) it is better to have lots of small pieces – they'd create nice bright meteors. If the rubble pieces are tens of metres across, it doesn't help.
Let’s say that hypothetically, an asteroid the size of Rhode Island is coming at us, it will be a direct hit - you’ve had the resources and funding you need, your plan is fully in place, everything you’ve wanted you got. The asteroid will hit in 10 years, what do you do? DVK: I had to look up how big Rhode Island is – a bit larger than the German Bundesland ‘Saarland’. Ok – this would correspond to an object about 60 km in diameter, right? That’s quite big – we would need a lot of rocket launches, this would be extremely difficult. I would pray. The good news is that we are quite convinced that we know all objects larger than just a few kilometers which come close to our planet. None of them is on a collision course, so we are safe.
the below is a reply to the above
Why are you quite convinced that you know all objects of that size? And what is your approach to finding new celestial bodies? DVK: There was a scientific study done over a few years (published in Icarus in 2018, search for Granvik) where they modelled how many objects there are out there. They compared this to the observations we have from the telescopic surveys. This gives us the expected numbers shown here on our infographic: https://www.esa.int/ESA_Multimedia/Images/2018/06/Asteroid_danger_explained
There are additional studies to estimate the ‘completeness’ – and we think that we know everything above roughly a few km in size.
To find new objects, we use survey telescopes that scan the night sky every night. The two major ones are Catalina and Pan-STARRS, funded by NASA. ESA is developing the so-called Flyeye telescope to add to this effort https://www.esa.int/ESA_Multimedia/Images/2017/02/Flyeye_telescope.
the below is a reply to the above
Thanks for the answer, that's really interesting! It's also funny that the first Flyeye deployed is in Sicily, less than 100 km from me. I really had no idea. DVK: Indeed, that's cool. Maybe you can go and visit it one day.
the below is a reply to the original answer
What about Interstellar objects however, like Oumuamua? DVK: The two that we have seen - 'Oumuamua and comet Borisov - were much smaller than the Saarland (or Rhode Island ;-) - not sure about Borisov, but 'Oumuamua was a few hundred meters in size. So while they could indeed come as a complete surprise, they are so rare that I wouldn't worry.
Would the public be informed if an impending asteroid event were to happen? And, how would the extinction play out? Bunch of people crushed to death, knocked off our orbit, dust clouds forever? DVK: We do not keep things secret – all our info is at the web page http://neo.ssa.esa.int. The ‘risky’ objects are in the ‘risk page’. We also put info on really close approaches there. It would also be very difficult to keep things ‘under cover’ – there are many high-quality amateur astronomers out there that would notice.
In 2029 asteroid Apophis will fly really close to Earth, even closer than geostationary satellites. Can we use some of those satellites to observe the asteroid? Is it possible to launch very cheap cube sats to flyby Apophis in 2029? DVK: Yes an Apophis mission during the flyby in 2029 would be really nice. We even had a special session on that topic at the last Planetary Defense Conference in 2019, and indeed CubeSats were mentioned. This would be a nice university project – get me a close-up of the asteroid with the Earth in the background!
the below is a reply to the above
So you’re saying it was discussed and shelved? In the conference we just presented ideas. To make them happen needs funding - in the case of ESA the support of our member countries. But having something presented at a conference is the first step. One of the results of the conference was a statement to space agencies to consider embarking on such a mission. See here: https://www.cosmos.esa.int/documents/336356/336472/PDC_2019_Summary_Report_FINAL_FINAL.pdf/341b9451-0ce8-f338-5d68-714a0aada29b?t=1569333739470
Go to the section 'resolutions'. This is now a statement that scientists can use to present to their funding agencies, demonstrating that it's not just their own idea.
Thanks for doing this AMA! Did we know the Chelyabinsk meteor in 2013 (the one which had some great videos on social media) was coming? If not, how come? Also, as a little side one, have there been any fatalities from impact events in the past 20 years? Unfortunately, the Chelyabinsk object was not seen in advance, because it came from the direction of the Sun where ground-based telescopes cannot look.
No known fatalities from impacts have happened in the past 20 years, although the Chelyabinsk event did cause many injuries, fortunately mostly minor.
MM
the below is a reply to the above
How often do impacts from that direction happen, compared to impacts from visible trajectories? In terms of fraction of the sky, the area that cannot be easily scanned from the ground is roughly a circle with a radius of 40°-50° around the current position of the Sun, corresponding to ~15% of the total sky. However, there is a slight enhancement of objects coming from that direction, therefore the fraction of objects that may be missed when heading towards us is a bit higher.
However, this applies only when detecting an asteroid in its "final plunge" towards the Earth. Larger asteroids can be spotted many orbits earlier, when they are farther away and visible in the night side of the sky. Their orbits can then be determined and their possible impacts predicted even years or decades in advance.
MM
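The ~15% figure above can be double-checked with spherical-cap geometry: a cap of angular radius θ around the Sun covers a fraction (1 - cos θ)/2 of the full sky. A minimal sketch, using the 40°-50° radii from the answer:

```python
import math

def sky_fraction(cap_radius_deg):
    """Fraction of the celestial sphere inside a cap of the given angular radius."""
    theta = math.radians(cap_radius_deg)
    return (1 - math.cos(theta)) / 2

for radius in (40, 45, 50):
    print(f"{radius} deg cap around the Sun: {sky_fraction(radius):.1%} of the sky")
```

A 45° cap works out to about 14.6% of the sky, matching the ~15% quoted.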
There must be a trade-off when targeting asteroids as they get closer to Earth; is there a rule of thumb for the best time to reach them, in terms of launch time versus time to reach the asteroid and then distance from Earth? DVK: Take e.g. a 'kinetic impactor' mission, like what DART and Hera are testing. Since we only change the velocity of the asteroid slightly, we need to hit the object early enough so that the object has time to move away from its collision course. Finding out when it is possible to launch requires simulations done by our mission analysis team. They take the strength of the launcher into account, also the available fuel for course corrections, and other things. Normally each asteroid has its own best scenario.
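As a back-of-envelope illustration of why hitting the object early matters (this is not the mission-analysis simulation described above, and real orbital dynamics can amplify or shrink the effect by factors of a few): a velocity change Δv applied t seconds before the encounter shifts the arrival position by roughly Δv · t.

```python
# Order-of-magnitude sketch only: arrival-position shift ~ dv * lead time.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def miss_distance_km(dv_mm_per_s, lead_time_years):
    """Rough arrival-position shift from a small velocity change applied early."""
    dv = dv_mm_per_s * 1e-3               # mm/s -> m/s
    t = lead_time_years * SECONDS_PER_YEAR
    return dv * t / 1000                  # m -> km

for years in (1, 5, 10):
    print(f"dv = 1 mm/s applied {years:2d} yr early: shift ~{miss_distance_km(1, years):.0f} km")
```

Even a millimetre-per-second nudge adds up to hundreds of kilometres over a decade, which is why the same impactor is far more effective the earlier it hits.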
Do you also look at protecting the moon from asteroids? Would an impact of a large enough scale potentially have major impacts on the earth? DVK: There are programmes that monitor the Moon and look for flashes from impacting small asteroids (or meteoroids) - https://neliota.astro.noa.g or the Spanish MIDAS project. We use the data to improve our knowledge about these objects. These programmes just look at what is happening now.
For now we would not do anything if we predicted a lunar impact. I guess this will change once we have a lunar base in place.
Why isn't there an international organisation of countries focused on asteroid defence? Imagine an organisation with a multi-billion-dollar budget and a programme of action funding new telescopes, asteroid exploration missions, plans for detection of potentially dangerous NEAs, and protocols for action after a detection - all international, with heads of state discussing these problems? DVK: There are international entities in place, mandated by the UN: the International Asteroid Warning Network (http://www.iawn.net) and the Space Mission Planning Advisory Group (http://www.smpag.net). These groups advise the United Nations. That is exactly where we come up with plans and protocols on action. But: they don't have budget – that needs to come from elsewhere. I am expecting that if we have a real threat, we would get the budget. Right now, we don't have a multi-billion budget.
the below is a reply to someone else's answer
There is no actual risk of any sizable asteroids hitting Earth in the foreseeable future. Any preparation for it would just be a waste of money. DVK: Indeed, as mentioned earlier, we do not expect a large object to hit us in the near future. We are mainly worried about those in the size range of 20 m to 40 m, whose impacts happen on average every few tens to hundreds of years, and of which we know only a percent or even less.
President Obama wanted to send a crewed spacecraft to an asteroid - in your opinion is this something that should still be done in the future, would there be any usefulness in having a human being walk/float on an asteroid's surface? DVK: It would definitely be cool. I would maybe even volunteer to go. Our current missions to asteroids are all robotic, the main reason is that it is much cheaper (but still expensive) to get the same science. But humans will expand further into space, I am sure. If we want to test human exploration activities, doing this at an asteroid would be easier than landing on a planet.
this is another reply Yes, but I am slightly biased by the fact that I work at the European astronaut centre ;) There exist many similarities to what we currently do for EVA (extra vehicular activities) operations on the International Space Station versus how we would 'float' around an asteroid. Slightly biased again, but using such a mission to test exploration technologies would definitely still have value. Thanks Obama! - AC
I've heard that some asteroids contains large amounts of iron. Is there a possibility that we might have "space mines" in the far away future, if our own supply if iron runs out? Yes, this is a topic in the field known as space mining, part of what we call Space Resources. In fact, learning how we can process material we might find on asteroids or other planetary bodies is increasingly important, as it opens up the opportunities for sustainable exploration and commercialization. Its a technology we need to master, and asteroids can be a great target for testing how we can create space mines :) - AC
By how much is DART expected to deflect Didymos? Do we have any indication of the largest size of an asteroid we could potentially deflect? PM: Didymos is a binary asteroid, consisting of a main asteroid Didymos A (~700 m) and a smaller asteroid Didymos B (~150 m) orbiting A with a ~12-hour period. DART is expected to impact Didymos B and change its orbital period around Didymos A by ~1% (~8 minutes).
The size of Didymos B is the most representative of a potential threat to Earth (the highest combination of probability and consequence of impact): smaller asteroids hit the Earth more often but have less severe consequences, while larger asteroids can have catastrophic consequences but their probability of hitting the Earth is very, very low.
the below is a reply to the above
Why is there a lower probability of larger asteroids hitting Earth? DVK: There are fewer large objects out there. The smaller they are, the more there are.
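This size-frequency trend is often modelled as a power law. The sketch below is purely illustrative: the cumulative exponent of -2 and the anchor of ~1000 near-Earth asteroids larger than 1 km are round assumed numbers, not survey results.

```python
def estimated_count(d_km, anchor_count=1000, anchor_d_km=1.0, exponent=2.0):
    """Cumulative count N(>D) under an assumed power law N(>D) = k * D**-exponent."""
    return anchor_count * (d_km / anchor_d_km) ** -exponent

for d in (0.02, 0.14, 1.0):
    print(f"N(> {d} km) ~ {estimated_count(d):,.0f}")
```

Halving the diameter roughly quadruples the count under this assumption, which is why the small ones vastly outnumber the large ones.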
the below is a reply to the original answer
Is there any chance that your experiment will backfire and send the asteroid towards earth? PM: Not at all, or we would not do that :) Actually Dimorphos (the Didymos "moon") will not even leave its orbit around Didymos. It will just slightly change its speed.
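To put "slightly change its speed" into numbers, here's a back-of-envelope estimate (assumptions: a circular orbit, a separation of ~1.2 km, and the ~12-hour period mentioned earlier; for a circular orbit an along-track kick changes the period by |ΔT/T| ≈ 3·Δv/v):

```python
import math

# Assumed round figures, not official mission values.
a = 1.2e3                 # Dimorphos orbital radius around Didymos, m
T = 12 * 3600             # orbital period, s
v = 2 * math.pi * a / T   # orbital speed, m/s
dv = 0.01 * v / 3         # kick for a ~1% period change, from |dT/T| ~ 3*dv/v
print(f"orbital speed ~{v * 1000:.0f} mm/s, dv for a 1% period change ~{dv * 1000:.2f} mm/s")
```

A fraction of a millimetre per second is enough, which is why the impact only nudges the moon's orbit around Didymos rather than sending anything anywhere near Earth.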
I'm sure you've been asked this many times but how realistic is the plot of Armageddon? How likely is it that our fate as a species will rely on (either) Bruce Willis / deep sea oil drillers? Taking into consideration that Bruce Willis is now 65 and by the time HERA is launched he will be 69, I do not think that we can rely on him this time (although I liked the movie).
HERA will investigate what method we could use to deflect an asteroid, and maybe the results will show that we indeed need to call the deep sea oil drillers.
HG
the below is a reply to the above
So then would it be easier to train oil drillers to become astronauts, or to train astronauts to be oil drillers? I do not know which one would be easier since I have no training/experience of deep sea oil drilling nor of being an astronaut, but as long as the ones that go to the asteroid have sufficient skills and training (even Bruce Willis), I would be happy.
HG
If budget was no object, which asteroid would you most like to send a mission to? Nice question! For me, I'd be looking at an asteroid we know something about, since I would be interested in using it for testing how we could extract resources from it. So for me, I would choose Itokawa (https://en.wikipedia.org/wiki/25143_Itokawa), which was visited by Hayabusa spacecraft. So we already have some solid prospecting carried out for this 'roid! - AC
this is another reply Not sure if it counts as an asteroid, but Detlef and myself would probably choose ʻOumuamua, the first discovered interstellar object.
MM
the below is a reply to the above
Do we even have the capability to catch up to something like that screaming through our solar system? That thing has to have a heck of a velocity to just barrel almost straight through like that. DVK: Correct, that would be a real challenge. We are preparing for a mission called 'Comet Interceptor' that is meant to fly to an interstellar object or at least a fresh comet - but it will not catch up with it, it will only perform a short flyby.
https://www.esa.int/Science_Exploration/Space_Science/ESA_s_new_mission_to_intercept_a_comet
After proving we are able to land on one, could an asteroid serve as a viable means to transport goods and/or humans throughout the solar system when the orbit of said asteroid proves beneficial? While it is probably quite problematic to land the payload, it could save fuel, or am I mistaken? Neat idea! Wonder if anyone has done the maths on the amount of fuel you would need/save vs certain targets. - AC
PM: To further complement: the saving is quite marginal indeed, because in order to land (softly) on the asteroid you actually need to get into the very same orbit as that asteroid. At that point your orbit remains the same whether you are on the asteroid or not.
Can current anti-ballistic missile systems intercept an asteroid on a terminal-phase Earth strike, or is it better to know beforehand and launch an impacting vehicle into space? DVK: While I do see presentations on nuclear explosions to deflect asteroids at our professional meetings, I have not seen anybody yet studying how we could use existing missile systems. So it's hard to judge whether existing missiles would do the job. But in general, it is better to know as early as possible about a possible impact and deflect it as early as possible. This will minimize the needed effort.
How much are we prepared against asteroid impacts at this moment? DVK: 42… :-) Seriously – I am not sure how to quantify ‘preparedness’. We have international working groups in place, mentioned earlier (search for IAWN, SMPAG). We have a Planetary Defence Office at ESA, a Planetary Defense Office at NASA (who spots the difference?), search the sky for asteroids, build space missions… Still we could be doing more. More telescopes to find the object, a space-based telescope to discover those that come from the direction of the Sun. Different test missions would be useful, … So there is always more we could do.
Have you got any data on NEO coverage? Are there estimations of the percentage of NEOs we have detected and are tracking? How can we improve the coverage? How many times have asteroids entered Earth's atmosphere without being detected beforehand? Here's our recently updated infographic with the fraction of undiscovered NEOs for each size range: https://www.esa.int/ESA_Multimedia/Images/2018/06/Asteroid_danger_explained
As expected, we are now nearly complete for the large ones, while many of the smaller ones are still unknown.
In order to improve coverage, we need both to continue the current approach, centered on ground-based telescopes, and probably also launch dedicated telescopes to space, to look at the fraction of the sky that cannot be easily observed from the ground (e.g., towards the Sun).
Regarding the last part of your question, small asteroids enter the Earth atmosphere very often (the infographics above gives you some numbers), while larger ones are much rarer.
In the recent past, the largest one to enter our atmosphere was about 20 meters in diameter, and it caused the Chelyabinsk event in 2013. It could not be detected in advance because it came from the direction of the Sun.
We have however detected a few small ones before impact. The first happened in 2008, when a ~4-meter asteroid was found to be on a collision course less than a day before impact, it was predicted to fall in Northern Sudan, and then actually observed falling precisely where (and when) expected.
MM
this is another reply
DVK: And to add to what MM said - check out http://neo.ssa.esa.int. There is a 'discovery statistics' section which provides some of the info you asked about. NASA provides similar information here: https://cneos.jpl.nasa.gov/stats/. To see the sky currently covered by the survey telescopes, you need the service of the Minor Planet Center, which we all work with: http://www.minorplanetcenter.org, 'observers', 'sky coverage'. That is a tool we use to plan where we look with our telescopes, so it is a more technical page.
Are there any automatic systems for checking large numbers of asteroids orbits, to see if the asteroid's orbit is coming dangerously close to Earth, or is it done by people individually for every asteroid? I ask it because LSST Rubin is coming online soon and you know it will discover a lot of new asteroids. Yes, such systems exist, and monitor all known and newly discovered asteroids in order to predict possible future impacts.
The end result of the process is what we call "risk list": http://neo.ssa.esa.int/risk-page
It is automatically updated every day once new observational data is processed.
MM
What are your favourite sci-fi series? DVK: My favorites are ‘The Expanse’, I also liked watching ‘Salvation’. For the first one I even got my family to give me a new subscription to a known internet streaming service so that I can see the latest episodes. I also loved ‘The Jetsons’ and ‘The Flintstones’ as a kid. Not sure the last one counts as sci-fi though. My long-time favorite was ‘Dark Star’.
this is another reply Big fan of The Expanse at the moment. Nice, hard sci-fi that has a good impression of being grounded in reality - AC
this is another reply When I was a kid I liked The Jetsons, when growing up Star Trek, Star wars and I also used to watch with my sister the 'V'.
HG
When determining the potential threat of a NEA, is the mass of an object a bigger factor or size? I'm asking because I'm curious if a small but massive object (say, with the density of Psyche) could survive atmospheric entry better than a comparatively larger but less massive object. The mass is indeed what really matters, since it’s directly related with the impact energy.
And as you said composition also matters, a metal object would survive atmospheric entry better, not just because it’s heavier, but also because of its internal strength.
MM
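MM's point can be illustrated with the kinetic-energy formula for a spherical impactor, E = ½ · ρ · (π/6) · D³ · v². The densities and entry speed below are assumed round values chosen for illustration, not measured data:

```python
import math

def impact_energy_kt(diameter_m, density_kg_m3, speed_m_s=17_000):
    """Kinetic energy of a spherical impactor, in kilotons of TNT (1 kt = 4.184e12 J)."""
    mass = density_kg_m3 * math.pi / 6 * diameter_m ** 3
    return 0.5 * mass * speed_m_s ** 2 / 4.184e12

# Same 20 m diameter, different (assumed) densities:
print(f"rocky (~3000 kg/m3): ~{impact_energy_kt(20, 3000):.0f} kt")
print(f"iron  (~7800 kg/m3): ~{impact_energy_kt(20, 7800):.0f} kt")
```

Same size, but the denser body carries about 2.6 times the energy, before even accounting for its better survival through the atmosphere.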
What are your thoughts on asteroid mining as portrayed in sci-fi movies? Is it feasible? If so, would governments or private space programs be the first to do so? What type of minerals can be found on asteroids that would merit the costs of extraction? Certainly there is valuable stuff you can find on asteroids. For example, the likely easiest material you can harvest from an asteroid would be volatiles such as H2O. Then you have industrial metals, things like iron, nickel, and platinum-group metals. Going further, you can break apart many of the oxide minerals you would find to get oxygen (getting you closer to producing rocket fuel in-situ!). It's feasible, but still needs a lot of testing both here on Earth and eventually on a target. It may be that governments, via agencies like ESA or NASA, do it first, to prove the principles somewhat, but I know many commercial entities are also aggressively working towards space mining. To show you that it's definitely possible, I'd like to plug the work of colleagues who have processed lunar regolith (which is similar to what you may find on asteroids) to extract both oxygen and metals. Check it out here: http://www.esa.int/ESA_Multimedia/Images/2019/10/Oxygen_and_metal_from_lunar_regolith
AC
Will 2020's climax be a really big rock? DVK: Let's hope not...
Considering NASA, ESA, IAU etc. are working hard to track Earth-grazing asteroids, how come the Chelyabinsk object that airburst over Russia in 2013 came as a total surprise? The Chelyabinsk object came from the direction of the Sun, where unfortunately ground-based telescopes cannot look. Therefore, it would not have been possible to discover it in advance with current telescopes. Dedicated space telescopes are needed to detect objects coming from this direction in advance.
MM
the below is a reply to the above
Is this to say that it was within specific solid angles for the entire time that we could have observed it given its size and speed? Yes, precisely that. We got unlucky in this case.
MM
Have any of you read Lucifer's Hammer by Larry Niven? In your opinion, how realistic is his depiction of an asteroid strike on Earth? DVK: I have – but really long ago, so I don’t remember the details. But I do remember that I really liked the book, and I remember I always wanted to have a Hot Fudge Sundae when reading it.
I was thinking about the asteroid threat as a teen and came up with these ideas (hint: they are not equally serious, the level of craziness goes up real quick). Could you please comment on their feasibility? 1. Attaching a rocket engine to an asteroid to make it gradually change trajectory; do that long in advance and it will miss Earth by thousands of km. 2. Transporting acid onto an asteroid (which are mainly metal), attaching a dome-shaped reaction chamber to it, and using heat and pressure to carry out the chemical reaction to disintegrate the asteroid. 3. This one is even more terrible than the previous one and totally Dan Brown inspired: transporting antimatter to an asteroid, impacting and causing annihilation. Thank you for this AMA and your time! DVK: Well the first one is not so crazy, I have seen it presented... the difficulty is that all asteroids are rotating in one way or another. So if you continuously fire the engine it would not really help. You'd need to switch the engine on and off. Very complex. And landing on an asteroid is challenging too. Just using the 'kinetic impactor' which we will test with DART/Hera (described elsewhere in this chat) is simpler. Another seriously proposed concept is to put a spacecraft next to an asteroid and use an ion engine (like we have on our Mercury mission BepiColombo) to 'push' the asteroid away.
As for 2 and 3 I think I will not live to see that happening ;-)
What is the process to determine the orbit of a newly discovered asteroid? The process is mathematically quite complex, but here's a short summary.
Everything starts with observations, in particular with measurements of the position of an asteroid in the sky, what we call "astrometry". Discovery telescopes extract this information from their discovery images, and make it available to everybody.
These datapoints are then used to calculate possible trajectories ("orbits") that pass through them. At first, with very few points, many orbits will be possible.
Using these orbits we can extrapolate where the asteroid will be located during the following nights, use a telescope to observe that part of the sky, and locate the object again.
From these new observations we can extract new "astrometry", add it to the orbit determination, and see that now only some of the possible orbits will be compatible with the new data. As a result, we now know the trajectory better than before, because a few of the possible orbits are not confirmed by the new data.
The cycle can then continue, with new predictions, new observations, and a more accurate determination of the object's orbit, until it can be determined with an extremely high level of accuracy.
MM
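The refinement cycle MM describes can be mimicked with a toy model (a deliberate simplification: real orbit determination fits full heliocentric orbits, not straight lines). An object drifts linearly across the sky, each night's position measurement is noisy, and a least-squares fit predicts where it will be ten nights later; with more nights of data the spread of possible predictions shrinks, just as more astrometry narrows the set of compatible orbits:

```python
import random

def fit_and_predict(times, positions, t_future):
    """Least-squares straight-line fit, evaluated at a future time."""
    n = len(times)
    tm = sum(times) / n
    pm = sum(positions) / n
    slope = (sum((t - tm) * (p - pm) for t, p in zip(times, positions))
             / sum((t - tm) ** 2 for t in times))
    return pm + slope * (t_future - tm)

def prediction_spread(n_nights, trials=2000, noise=0.1, seed=1):
    """Std. dev. of the predicted position 10 nights ahead, over noisy re-runs."""
    rng = random.Random(seed)
    preds = []
    for _ in range(trials):
        times = list(range(n_nights))
        obs = [0.5 * t + rng.gauss(0, noise) for t in times]  # true drift: 0.5 deg/night
        preds.append(fit_and_predict(times, obs, n_nights + 10))
    mean = sum(preds) / len(preds)
    return (sum((p - mean) ** 2 for p in preds) / len(preds)) ** 0.5

print(f"prediction spread after  3 nights: {prediction_spread(3):.2f} deg")
print(f"prediction spread after 10 nights: {prediction_spread(10):.2f} deg")
```

The extrapolation error falls sharply with more observations, which is the same reason follow-up astrometry so quickly removes objects from the risk list.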
What are some asteroids that are on your "watchlist"? We have exactly that list on our web portal: http://neo.ssa.esa.int/risk-page
It's called "risk list", and it includes all known asteroids for which we cannot exclude a possible impact over the next century. It is updated every day to include newly discovered asteroids, and remove those that have been excluded as possible impactors thanks to new observations.
MM
the below is a reply to the above
That's quite a list!! Do you guys ever feel stressed or afraid when another dangerous candidate (and by dangerous I mean those above 200 m) is added to this Risk List? Yes, when new dangerous ones are added it's important that we immediately do our best to gather more data on them, observing them with telescopes in order to get the information we need to improve our knowledge of their orbit.
And then the satisfaction of getting the data needed to remove one from the list is even greater!
MM
What inspired you to go into this field of study? I was fascinated by astronomy in general since I was a kid, but the actual "trigger" that sparked my interest in NEOs was a wonderful summer course on asteroids organized by a local amateur astronomers association. I immediately decided that I would do my best to turn this passion into my job, and I'm so happy to have been able to make that dream come true.
MM
this is another reply DVK: I started observing meteors when I was 14, just by going outside and looking at the night sky. Since then, small bodies in the solar system were always my passion.
As a layperson, I still think using nuclear weapons against asteroids is the coolest method despite better methods generally being available. Do you still consider the nuclear option the cool option, or has your expertise in the field combined with the real-life impracticalities made it into a laughable/silly/cliche option? DVK: We indeed still study the nuclear option. There are legal aspects though, the ‘outer space treaty’ forbids nuclear explosions in space. But for a large object or one we discover very late it could be useful. That’s why we have to focus on discovering all the objects out there as early as possible – then we have time enough to use more conventional deflection methods, like the kinetic impactor (the DART/Hera scenario).
It seems like doing this well would require international cooperation, particularly with Russia. Have you ever reached out to Russia in your work? Do you have a counterpart organization there that has a similar mission? DVK: Indeed international cooperation is important - asteroids don't know about our borders! We work with a Russian team to perform follow-up observations of recently discovered NEOs. Russia is also involved in the UN-endorsed working groups that we have, IAWN and SMPAG (explained in another answer).
How much can experts tell from a video of a fireball or meteor? Can you work out what it's made of and where it came from? https://www.reddit.com/space/comments/hdf3xe/footage_of_a_meteor_at_barrow_island_australia/?utm_source=share&utm_medium=web2x If multiple videos or pictures, taken from different locations, are available, then it's possible to reconstruct the trajectory, and extrapolate where the object came from.
Regarding the composition, it's a bit more difficult if nothing survives to the ground, but some information can be obtained indirectly from the fireball's color, or its fragmentation behavior. If a spectral analysis of the light can be made, it's then possible to infer the chemical composition in much greater detail.
MM
I've always wanted to know what the best meteorite buying site is and what their average price is?? DVK: Serious dealers will be registered with the 'International Meteorite Collectors Association (IMCA)' - https://www.imca.cc/. They should provide a 'certificate of authenticity' where it says that they are member there. If you are in doubt, you can contact the association and check. Normally there are rough prices for different meteorite types per gram. Rare meteorites will of course be much more expensive than more common ones. Check the IMCA web page to find a dealer close to you.
Just read through Aidans link to the basaltic rock being used as a printing material for lunar habitation. There is a company called Roxul that does stone woven insulation that may be able to shed some light on the research they have done to minimize their similarity to asbestos as potentially carcinogenic materials deemed safe for use in commercial and residential applications. As the interior surfaces will essentially be 3D printed lunar regolith what are the current plans to coat or dampen the affinity for the structure to essentially be death traps for respiratory illness? At least initially, many of these 3d printed regolith structures would not be facing into pressurised sections, but would rather be elements placed outside and around our pressure vessels. Such structures would be things like radiation shields, landing pads or roadways, etc. In the future, if we move towards forming hermetically sealed structures, then your point is a good one. Looking into terrestrial solutions to this problem would be a great start! - AC
What kind of career path does it take to work in the asteroid hunting field? It's probably different for each of us, but here's a short summary of my own path.
I became interested in asteroids, and near-Earth objects in particular, thanks to a wonderful summer course organized by a local amateur astronomers association. Amateur astronomers play a great role in introducing people, and young kids in particular, to these topics.
Then I took physics as my undergrad degree (in Italy), followed by a Ph.D. in astronomy in the US (Hawaii in particular, a great place for astronomers thanks to the exceptional telescopes hosted there).
After finishing the Ph.D. I started my current job at ESA's NEO Coordination Centre, which allowed me to realize my dream of working in this field.
MM
DVK: Almost all of us have a Master's degree either in aerospace engineering, mathematics, physics/astronomy/planetary science, or computer science. Some of us - as MM - have a Ph.D. too. But that's not really a requirement. This is true for our team at ESA, but also for other teams in other countries.
What is the likelihood of an asteroid hitting the Earth In the next 200 years? It depends on the size, large ones are rare, while small ones are much more common. You can check this infographic to get the numbers for each size class: https://www.esa.int/ESA_Multimedia/Images/2018/06/Asteroid_danger_explained
MM
Have you played the Earth Defence Force games and if you have, which one is your favourite? No I have not played the Earth Defence Force games, but I just looked it up and I think I would like it. Which one would you recommend?
HG
How close is too close to earth? Space is a SUPER vast void so is 1,000,000 miles close, 10,000,000? And if an asteroid is big enough can it throw earth off its orbit? DVK: Too close for my taste is when we compute an impact probability > 0 for the object. That means the flyby distance is zero :-) Those are the objects on our risk page http://neo.ssa.esa.int/risk-page.
If an object can alter the orbit of another one, we would call it a planet. So unless we have a rogue planet coming from another solar system (verrry unlikely) we are safe from that.
How can I join you when I'm older? DVK: Somebody was asking about our career paths... Study aerospace engineering or math or physics or computer science, get a Masters. Possibly a Ph.D. Then apply for my position when I retire. Check here for how to apply at ESA: https://www.esa.int/About_Us/Careers_at_ESA/Frequently_asked_questions2#HR1
How much is too much? DVK: 42 again
Are you aware of any asteroids that are theoretically within our reach, or will be within our reach at some point, that are carrying a large quantity of shungite? If you're not aware, shungite is like a 2 billion year old like, rock stone that protects against frequencies and unwanted frequencies that may be traveling in the air. I bought a whole bunch of the stuff. Put them around the la casa. Little pyramids, stuff like that. DVK: If I remember my geology properly, shungite forms in sedimentary deposits in water. This requires liquid water, i.e. a larger planet. So I don't think there is a high chance to see that on asteroids.
submitted by 500scnds to tabled [link] [comments]

[Results] Do you like ice-cream? (all about names)

Now, for anyone who didn't take part in this survey, let me briefly explain. This survey wasn't actually about ice-cream. What I set out to investigate was how willing people are to give out their own name online on a survey. To do this I put together a seemingly innocent survey purporting to be about ice-cream. Respondents were first asked for some demographic information, with a question asking for their name included, before being asked the "actual" question of Do you like ice-cream? on the next page. On the final page the ruse was revealed and respondents were asked how they responded to the name question: with their real name, another name, or something that's not a name at all, i.e. a non-name.
I got 915 responses from over 60 countries, which I was very pleased with. So thank you very much to everyone who took part!
Although the ice-cream question wasn't actually the main focus of the survey, I've put together the results of that question too for anyone who's interested, which I'll post in a comment below.
Without further ado, the results:
Just over half of respondents gave their real name. Interestingly, for the first few hours of the survey, the three options were consistently roughly even, at about a third each (and so the majority were not giving their real name). It was quite a bit later on that the real name preference started to show. This is probably a reflection of the different demographics of people on SampleSize at different times of day.
There isn't much difference across gender, with a similar percentage giving their real name for all three options. However, non-binary people were the most likely to give a non-name and the least likely to give a fake name, while women were at the opposite end, being the least likely to give a non-name and the most likely to give a fake name, with men in the middle.
There is a very clear trend of younger people being more likely to give their own name and older people being less likely to. It's worth noting that a suspiciously high number of people in their 60s were 69, which may explain the high proportion of 'joke' non-name responses.
Canada and Australia had a notably lower proportion giving their real name than other countries. The Netherlands had the smallest proportion to give a fake name.
And just for fun:
It seems people who don't like ice-cream are less willing to give their real name but quite unlikely to give a fake name, much preferring a non-name. This could indicate some of these are joke responses.
A couple of things to bear in mind:
Last points:
Most people who gave their real name gave only their first name. A small minority gave their full name.
About three quarters of 'another name' responses seemed feasible, while a quarter did not (e.g. names like Ben Dover or Jennifer Lopez).
Some favourite responses to What is your name?:
Edit: Link to original thread.
submitted by tg3y to SampleSize [link] [comments]

Survey about online dating

Are you over 18, out as non-binary and have tried online dating? Then please take my 15 min research survey.
My name is Kyle (they/them) and I’m a non-binary undergraduate student at Bowdoin college. I’m doing research on the experiences of out trans and non-binary people while online dating and I would love to have your input!
If you are interested in sharing your thoughts with me and want to be entered to win a $25 gift card please take my survey at: https://bowdoincollege.qualtrics.com/jfe/form/SV_d0huHiLyk5MquR7
This survey will also ask you if you are interested in doing a follow up interview which is totally optional.
If you really don't like surveys, but would be interested in just a 1 hour video interview about your feelings on online dating, go here instead https://bowdoincollege.qualtrics.com/jfe/form/SV_cUy7rm2KxHEOs6N Those who I interview will be given a $10 gift card as a token of my appreciation.
If you have any questions, please feel free to send a dm!

Informed Consent Information
I am asking you to participate in a research study titled “Exploratory Research on the Experiences of Transgender and Non-Binary People Using Online Dating Services”. This study is being led by Kyle Putnam (They/Them), a student in the Sociology department at Bowdoin College. The Faculty Advisor for this study is Professor Theo Greene (He/Him) of the Sociology department at Bowdoin College.
What the study is about
The purpose of this research survey is to better understand the specific experiences that out transgender and non-binary people have while online dating. This research will ask you about what online dating platforms you have used, what experiences you have had while using them (both positive and negative), and what information about your gender you choose to share while online dating.
What we will ask you to do
In this online survey you will be asked a series of questions about the topics listed above as well as information about your gender, age and sexuality. This survey should take you approximately 15 minutes to complete. At the end of the survey you will be directed to a separate survey which will ask for your email if you would like to be contacted about participating in a follow up interview on this research topic. More information about this interview will be provided if you indicate you are interested.
Privacy/Confidentiality/Data Security
To protect your privacy, data from this survey will contain no identifying features. If you choose to enter your email address for a follow up interview, your email will be stored separately from your survey data. Survey data and email addresses will be downloaded and stored on a private, password protected computer.
If you have questions
The main researcher conducting this study is Kyle Putnam (They/Them) an undergraduate student at Bowdoin College. If you have questions about this survey before, during, or after you take it, you may contact Kyle Putnam at [email protected]. If you have any questions or concerns regarding your rights as a subject in this study, you may contact the Institutional Review Board (IRB) Chair, Scott Sehon at (207)725-3753 or at [email protected]. If Professor Sehon is not available, you can contact the IRB Administrator Jean Harrison at (207)798-7056 or at [email protected].
submitted by EducationalDecision to NonBinary [link] [comments]

Differences between LISP 1.5 and Common Lisp, Part 2a

Here is the first part of the second part (I ran out of characters again...) of a series of posts documenting the many differences between LISP 1.5 and Common Lisp. The preceding post can be found here.
In this part we're going to look at LISP 1.5's library of functions.
Of the 146 symbols described in The LISP 1.5 Programmer's Manual, sixty-two have the same names as standard symbols in Common Lisp. These symbols are enumerated here.
The symbols t and nil have been discussed already. The remaining symbols are operators. We can divide them into groups based on how semantics (and syntax) differ between LISP 1.5 and Common Lisp:
  1. Operators that have the same name but have quite different meanings
  2. Operators that have been extended in Common Lisp (e.g. to accept a variable number of arguments), but that otherwise have similar enough meanings
  3. Operators that have remained effectively the same
The third group is the smallest. Some functions differ only in that they have a larger domain in Common Lisp than in LISP 1.5; for example, the length function works on sequences instead of lists only. Such functions are pointed out below. All the items in this list should, given the same input, behave identically in Common Lisp and LISP 1.5. They all also have the same arity.
These are somewhat exceptional items on this list. In LISP 1.5, car and cdr could be used on any object; for atoms, the result was undefined, but there was a result. In Common Lisp, applying car and cdr to anything that is not a cons is an error. Common Lisp does specify that taking the car or cdr of nil results in nil, which was not a feature of LISP 1.5 (it comes from Interlisp).
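A quick sketch of the difference (the LISP 1.5 side is paraphrased from the description above; the Common Lisp side is standard behavior):

```lisp
;; Common Lisp:
(car nil)    ; => NIL (a guarantee inherited from Interlisp)
(cdr nil)    ; => NIL
(car 'atom)  ; signals a TYPE-ERROR in Common Lisp;
             ; in LISP 1.5 this returned some undefined value
```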
Common Lisp's equal technically compares more things than the LISP 1.5 function, but of course Common Lisp has many more kinds of things to compare. For lists, symbols, and numbers, Common Lisp's equal is effectively the same as LISP 1.5's equal.
In Common Lisp, expt can return a complex number. LISP 1.5 does not support complex numbers (as a first class type).
As mentioned above, Common Lisp extends length to work on sequences. LISP 1.5's length works only on lists.
It's kind of a technicality that this one makes the list. In terms of functionality, you probably won't have to modify uses of return---in the situations in which it was used in LISP 1.5, it worked the same as it would in Common Lisp. But Common Lisp's definition of return is really hiding a huge difference between the two languages discussed under prog below.
As with length, this function operates on sequences and not only lists.
In Common Lisp, this function is deprecated.
LISP 1.5 defined setq in terms of set, whereas Common Lisp makes setq the primitive operator.
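The relationship can be sketched like this (Common Lisp syntax; in LISP 1.5 the definition ran in the other direction):

```lisp
;; SETQ quotes its first argument; SET evaluates it.
(set 'x 5)   ; explicit quoting of the symbol
(setq x 5)   ; same effect; LISP 1.5 treated this as sugar over SET
;; Note: in Common Lisp, SET is deprecated and works only on
;; the symbol-value of symbols, while SETQ is the primitive.
```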
Of the remaining thirty-three, seven are operators that behave differently from the operators of the same name in Common Lisp:
  • apply, eval
The connection between apply and eval has been discussed already. Besides setq and prog (or special or common), function parameters were the only way to bind variables in LISP 1.5 (the idea of a value cell was introduced by Maclisp); the manual describes apply as "The part of the interpreter that binds variables" (p. 17).
  • compile
In Common Lisp the compile function takes one or two arguments and returns three values. In LISP 1.5 compile takes only a single argument, a list of function names to compile, and returns that argument. The LISP 1.5 compiler would automatically print a listing of the generated assembly code, in the format understood by the Lisp Assembly Program or LAP. Another difference is that compile in LISP 1.5 would immediately install the compiled definitions in memory (and store a pointer to the routine under the subr or fsubr indicators of the compiled functions).
  • count, uncount
These have nothing to do with Common Lisp's count. Instead of counting the number of items in a collection satisfying a certain property, count is an interface to the "cons counter". Here's what the manual says about it (p. 34):
The cons counter is a useful device for breaking out of program loops. It automatically causes a trap when a certain number of conses have been performed.
The counter is turned on by executing count[n], where n is an integer. If n conses are performed before the counter is turned off, a trap will occur and an error diagnostic will be given. The counter is turned off by uncount[NIL]. The counter is turned on and reset each time count[n] is executed. The counter can be turned on so as to continue counting from the state it was in when last turned off by executing count[NIL].
This counting mechanism has no real counterpart in Common Lisp.
  • error
In Common Lisp, error is part of the condition system, and accepts a variable number of arguments. In LISP 1.5, it has a single, optional argument, and of course LISP 1.5 had no condition system. It had errorset, which we'll discuss later. In LISP 1.5, executing error would cause an error diagnostic and print its argument if given. While this is fairly similar to Common Lisp's error, I'm putting it in this section since the error handling capabilities of LISP 1.5 are very limited compared to those of Common Lisp (consider that this was one of the only ways to signal an error). Uses of error in LISP 1.5 won't necessarily run in Common Lisp, since LISP 1.5's error accepted any object as an argument, while Common Lisp's error needs designators for a simple-error condition. An easy conversion is to change (error x) into (error "~A" x).
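If you are porting a body of LISP 1.5 code, one option is a small shim rather than editing every call site. This is only a sketch; error-1.5 is a made-up name, not part of either language:

```lisp
(defun error-1.5 (&optional (x nil x-supplied-p))
  "Accept any object, as LISP 1.5's ERROR did, and turn it
into a SIMPLE-ERROR that Common Lisp will accept."
  (if x-supplied-p
      (error "~A" x)
      (error "Unspecified error")))
```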
  • map
This function is quite different from Common Lisp's map. The incompatibility is mentioned in Common Lisp: The Language:
In MacLisp, Lisp Machine Lisp, Interlisp, and indeed even Lisp 1.5, the function map has always meant a non-value-returning version. However, standard computer science literature, including in particular the recent wave of papers on "functional programming," have come to use map to mean what in the past Lisp implementations have called mapcar. To simplify things henceforth, Common Lisp follows current usage, and what was formerly called map is named mapl in Common Lisp.
But even mapl isn't the same as map in LISP 1.5, since mapl returns the list it was given and LISP 1.5's map returns nil. Actually there is another, even larger incompatibility that isn't mentioned: the order of the arguments is different. The first argument of LISP 1.5's map was the list to be mapped and the second argument was the function to map over it. (The order was changed in Maclisp, likely because of the extension of the mapping functions to multiple lists.) You can't just change all uses of map to mapl because of this difference. You could define a function like map-1.5, such as
(defun map-1.5 (list function) (mapl function list) nil) 
and replace map with map-1.5 (or just shadow the name map).
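For example, with the map-1.5 definition above loaded, a LISP 1.5-style call behaves as expected:

```lisp
;; MAPL applies the function to successive tails, like LISP 1.5's MAP:
(map-1.5 '(1 2 3) #'print)
;; prints (1 2 3), then (2 3), then (3), and returns NIL
```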
  • function
This operator has been discussed earlier in this post.
Common Lisp doesn't need anything like LISP 1.5's function. However, mostly by coincidence, it will tolerate it in many cases; in particular, it works with lambda expressions and with references to global function definitions.
  • search
This function isn't really anything like Common Lisp's search. Here is how it is defined in the manual (p. 63, converted from m-expressions into Common Lisp syntax):
(defun search (x p f u) (cond ((null x) (funcall u x)) ((funcall p x) (funcall f x)) (t (search (cdr x) p f u)))) 
Somewhat confusingly, the manual says that it searches "for an element that has the property p"; one might expect the second branch to test (get x p).
The function is kind of reminiscent of the testr function, used to exemplify LISP 1.5's indefinite scoping in the previous part.
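To make the behavior concrete, here is a hypothetical use of that definition, renamed search-1.5 to avoid clashing with cl:search (note that p and f receive the whole remaining tail, not the element):

```lisp
;; A renamed version, so we don't have to shadow CL:SEARCH:
(defun search-1.5 (x p f u)
  (cond ((null x) (funcall u x))
        ((funcall p x) (funcall f x))
        (t (search-1.5 (cdr x) p f u))))

;; Find the first element that is even:
(search-1.5 '(1 3 4 5)
            (lambda (tail) (evenp (car tail)))  ; p: test the head of each tail
            #'car                               ; f: on success, return the element
            (constantly nil))                   ; u: on failure, return NIL
;; => 4
```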
  • special, unspecial
LISP 1.5's special variables are pretty similar to Common Lisp's special variables—but only because all of LISP 1.5's variables are pretty similar to Common Lisp's special variables. The difference between regular LISP 1.5 variables and special variables is that symbols declared special (using this special special special operator) have a value on their property list under the indicator special, which is used by the compiler when no binding exists in the current environment. The interpreter knew nothing of special variables; thus they could be used only in compiled functions. Well, they could be used in any function, but the interpreter wouldn't find the special value. (It appears that this is where the tradition of Lisp dialects having different semantics when compiled versus when interpreted began; eventually Common Lisp would put an end to the confusion.)
You can generally change special into defvar and get away fine. However there isn't a counterpart to unspecial. See also common.
Now come the operators that are essentially the same in LISP 1.5 and in Common Lisp, but have some minor differences.
  • append
The LISP 1.5 function takes only two arguments, while Common Lisp allows any number.
  • cond
In Common Lisp, when no test in a cond form is true, the result of the whole form is nil. In LISP 1.5, an error was signaled, unless the cond was contained within a prog, in which case it would quietly do nothing. Note that the cond must be at the "top level" inside the prog; cond forms at any deeper level will error if no condition holds.
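The difference is easy to demonstrate:

```lisp
;; Common Lisp: falls through to NIL when no clause matches.
(cond ((> 1 2) 'impossible))  ; => NIL
;; In LISP 1.5 the same form signaled an error, unless it
;; appeared at the top level of a PROG, where it quietly did nothing.
```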
  • gensym
The LISP 1.5 gensym function takes no arguments, while the Common Lisp function does.
  • get
Common Lisp's get takes three arguments, the last of which is a value to return if the symbol does not have the indicator on its property list; in LISP 1.5 get has no such third argument.
  • go
In LISP 1.5 go was allowed in only two contexts: (1) at the top level of a prog; (2) within a cond form at the top level of a prog. Later dialects would loosen this restriction, leading to much more complicated control structures. While progs in LISP 1.5 were somewhat limited, it is at least fairly easy to tell what's going on (e.g. loop conditions). Note that return does not appear to be limited in this way.
  • intern
In Common Lisp, intern can take a second argument specifying in what package the symbol is to be interned, but LISP 1.5 does not have packages. Additionally, the required argument to intern is a string in Common Lisp; LISP 1.5 doesn't really have strings, and so intern instead wants a pointer to a list of full words (of packed BCD characters; the print names of symbols were stored in this way).
  • list
In Common Lisp, list can take any number of arguments, including zero, but in LISP 1.5 it seems that it must be given at least one argument.
  • load
In LISP 1.5, load can't be given a filespec as an argument; in fact, it can't be given anything as an argument. Its purpose is simply to hand control over to the loader. The loader "expects octal correction cards, 704 row binary cards, and a transfer card." If you have the source code that would be compiled into the material to be loaded, then you can just put it in another file and use Common Lisp's load to load it in. But if you don't have the source code, then you're out of luck.
  • mapcon, maplist
The differences between Common Lisp and LISP 1.5 regarding these functions are similar to those for map given above. Both of these functions returned nil in LISP 1.5, and they took the list to be mapped as their first argument and the function to map as their second argument. A major incompatibility to note is that maplist in LISP 1.5 did what mapcar in Common Lisp does; Common Lisp's maplist is different.
  • member
In LISP 1.5, member takes none of the fancy keyword arguments that Common Lisp's member does, and returns only a truth value, not the tail of the list.
  • nconc
In LISP 1.5, this function took only two arguments; in Common Lisp, it takes any number.
  • prin1, print, terpri
In Common Lisp, these functions take an optional argument specifying an output stream to which they will send their output, but in LISP 1.5 prin1 and print take just one argument, and terpri takes no arguments.
  • prog
In LISP 1.5, the list of program variables was just that: a list of variables. No initial values could be provided as they can in Common Lisp; all the program variables started out bound to nil. Note that the program variables are just like any other variables in LISP 1.5 and have indefinite scope.
In the late '70s and early '80s, the maintainers of Maclisp and Lisp Machine Lisp wanted to add "naming" abilities to prog. You could say something like
(prog outer () ... (prog () (return ... outer))) 
and the return would jump not just out of the inner prog, but also out of the outer one. However, they ran into a problem with integrating a named prog with parts of the language that were based on prog. For example, they could add a special case to dotimes to handle an atomic first argument, since regular dotimes forms had a list as their first argument. But Maclisp's do had two forms: the older (introduced in 1969) form
(do atom initial step-form end-test body...) 
and the newer form, which was identical to Common Lisp's do. The older form was equivalent to
(do ((atom initial step-form)) (end-test) body...) 
Since the older form was still supported, they couldn't add a special case for an atomic first argument because that was the normal case of the older kind of do. They ended up not adding named prog, owing to these kinds of difficulties.
However, during the discussion of how to make named prog work, Kent Pitman sent a message that contained the following text:
I now present my feelings on this issue of how DO/PROG could be done in order this haggling, part of which I think comes out of the fact that these return tags are tied up in PROG-ness and so on ... Suppose you had the following primitives in Lisp: (PROG-BODY ...) which evaluated all non-atomic stuff. Atoms were GO-tags. Returns () if you fall off the end. RETURN does not work from this form. (PROG-RETURN-POINT form name) name is not evaluated. Form is evaluated and if a RETURN-FROM specifying name (or just a RETURN) were executed, control would pass to here. Returns the value of form if form returns normally or the value returned from it if a RETURN or RETURN-FROM is executed. [Note: this is not a [*]CATCH because it is lexical in nature and optimized out by the compiler. Also, a distinction between NAMED-PROG-RETURN-POINT and UNNAMED-PROG-RETURN-POINT might be desirable – extrapolate for yourself how this would change things – I'll just present the basic idea here.] (ITERATE bindings test form1 form2 ...) like DO is now but doesn't allow return or goto. All forms are evaluated. GO does not work to get to any form in the iteration body. So then we could just say that the definitions for PROG and DO might be (ignore for now old-DO's – they could, of course, be worked in if people really wanted them but they have nothing to do with this argument) ... (PROG [  ]  . ) => (PROG-RETURN-POINT (LET  (PROG-BODY . )) [  ]) (DO [  ]   . ) => (PROG-RETURN-POINT (ITERATE   (PROG-BODY . )) [  ]) Other interesting combinations could be formed by those interested in them. If these lower-level primitives were made available to the user, he needn't feel tied to one of PROG/DO – he can assemble an operator with the functionality he really wants. 
Two years later, Pitman would join the team developing the Common Lisp language. For a little while, incorporating named prog was discussed, which eventually led to the splitting of prog in quite a similar way to Pitman's proposal. Now prog is a macro, simply combining the three primitive operators let, block, and tagbody. The concept of the tagbody primitive in its current form appears to have been introduced in this message, which is a writeup by David Moon of an idea due to Alan Bawden. In the message he says
The name could be GO-BODY, meaning a body with GOs and tags in it, or PROG-BODY, meaning just the inside part of a PROG, or WITH-GO, meaning something inside of which GO may be used. I don't care; suggestions anyone?
Guy Steele, in his proposed evaluator for Common Lisp, called the primitive tagbody, which stuck. It is a little bit more logical than go-body, since go is just an operator and allowed anywhere in Common Lisp; the only special thing about tagbody is that atoms in its body are treated as tags.
  • prog2
In LISP 1.5, prog2 was really just a function that took two arguments and returned the result of the evaluation of the second one. The purpose of it was to avoid having to write (prog () ...) everywhere when all you want to do is call two functions. In later dialects, progn would be introduced and the "implicit progn" feature would remove the need for prog2 used in this way. But prog2 stuck around and was generalized to a special operator that evaluated any number of forms, while holding on to the result of the second one. Programmers developed the (prog2 nil ...) idiom to save the result of the first of several forms; later prog1 was introduced, making the idiom obsolete. Nowadays, prog1 and prog2 are used typically for rather special purposes.
Regardless, in LISP 1.5 prog2 was a machine-coded subroutine that was equivalent to the following function definition in Common Lisp:
(defun prog2 (one two) two) 
  • read
The read function in LISP 1.5 did not take any arguments; Common Lisp's read takes four. In LISP 1.5, read took its input either from "SYSPIT" or from the punched card reader. It seems that SYSPIT stood for "SYStem Paper (maybe Punched) Input Tape", and that it designated a punched tape reader; alternatively, it might designate a magnetic tape reader, but the manual makes reference to punched cards. But more on input and output later.
  • remprop
The only difference between LISP 1.5's remprop and Common Lisp's remprop is that the value of LISP 1.5's remprop is always nil.
  • setq
In Common Lisp, setq takes an arbitrary even number of arguments, representing pairs of symbols and values to assign to the variables named by the symbols. In LISP 1.5, setq takes only two arguments.
  • sublis
LISP 1.5's sublis and subst do not take the keyword arguments that Common Lisp's sublis and subst take.
  • trace, untrace
In Common Lisp, trace and untrace are operators that take any number of arguments and trace the functions named by them. In LISP 1.5, both trace and untrace take a single argument, which is a list of the functions to trace.

Functions not in Common Lisp

We turn now to the symbols described in the LISP 1.5 Programmer's Manual that don't appear in Common Lisp. Let's get the easiest case out of the way first: Here are all the operators in LISP 1.5 that have a corresponding operator in Common Lisp, with notes about differences in functionality where appropriate.
  • add1, sub1
These functions are the same as Common Lisp's 1+ and 1- in every way, down to the type genericism.
  • conc
This is just Common Lisp's append, or LISP 1.5's append extended to more than two arguments.
  • copy
Common Lisp's copy-list function does the same thing.
  • difference
This corresponds to -, although difference takes only two arguments.
  • divide
This function takes two arguments and is basically a consing version of Common Lisp's floor:
(divide x y) = (multiple-value-list (floor x y)) 
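A sketch of divide as a Common Lisp function (divide-1.5 is a made-up name; LISP 1.5's behavior on negative operands may not match floor exactly):

```lisp
(defun divide-1.5 (x y)
  ;; FLOOR returns quotient and remainder as two values;
  ;; DIVIDE consed them into a list.
  (multiple-value-list (floor x y)))

(divide-1.5 7 2)  ; => (3 1)
```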
  • digit
This function takes a single argument, and is like Common Lisp's digit-char-p except that the radix isn't variable, and it returns a true or false value only (and not the weight of the digit).
  • efface
This function deletes the first appearance of an item from a list. A call like (efface item list) is equivalent to the Common Lisp code (delete item list :count 1).
  • greaterp, lessp
These correspond to Common Lisp's > and <, although greaterp and lessp take only two arguments.
As a historical note, the names greaterp and lessp survived in Maclisp and Lisp Machine Lisp. Both of those languages also had > and <, which were used for the two-argument case; Common Lisp favored genericism and went with > and < only. However, a vestige of the old predicates still remains, in the lexicographic ordering functions: char-lessp, char-greaterp, string-lessp, string-greaterp.
  • minus
This function takes a single argument and returns its negation; it is equivalent to the one-argument case of Common Lisp's -.
  • leftshift
This function is the same as ash in Common Lisp; it takes two arguments, m and n, and returns m × 2^n. Thus if the second argument is negative, the shift is to the right instead of to the left.
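For example:

```lisp
(ash 5 2)    ; => 20, i.e. 5 × 2^2
(ash 20 -2)  ; => 5, a right shift when the count is negative
```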
  • liter
This function is identical in essence to Common Lisp's alpha-char-p, though more precisely it's closer to upper-case-p; LISP 1.5 was used on computers that made no provision for lowercase characters.
  • pair
This is equivalent to the normal, two-argument case of Common Lisp's pairlis.
  • plus
This function takes any number of arguments and returns their sum; its Common Lisp counterpart is +.
  • quotient
This function is equivalent to Common Lisp's /, except that quotient takes only two arguments.
  • recip
This function is equivalent to the one-argument case of Common Lisp's /.
  • remainder
This function is equivalent to Common Lisp's rem.
  • times
This function takes any number of arguments and returns their product; its Common Lisp counterpart is *.
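To summarize the correspondences above, here is a sketch of Common Lisp compatibility definitions for several of these operators. These definitions are my own illustration, not from the manual; the fixed two-argument arities follow the notes above.

```lisp
;; Sketch: Common Lisp approximations of the LISP 1.5 operators above.
;; Illustrative only -- not taken from the LISP 1.5 Programmer's Manual.
(defun add1 (n) (1+ n))
(defun sub1 (n) (1- n))
(defun difference (x y) (- x y))
(defun divide (x y) (multiple-value-list (floor x y)))   ; conses up (quotient remainder)
(defun efface (item list) (delete item list :count 1))
(defun greaterp (x y) (> x y))
(defun lessp (x y) (< x y))
(defun leftshift (m n) (ash m n))
(defun minus (n) (- n))
(defun pair (keys data) (pairlis keys data))
(defun plus (&rest args) (apply #'+ args))
(defun quotient (x y) (/ x y))
(defun recip (x) (/ x))
(defun remainder (x y) (rem x y))
(defun times (&rest args) (apply #'* args))
```

For example, (divide 7 2) under these definitions yields the list (3 1), matching the consing behavior described above.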
Part 2b will be posted in a few hours probably.
submitted by kushcomabemybedtime to lisp

A guide on hitting Legend in Comp

Crossposting this from /crucibleplaybook, figured some people on here might find this helpful as well. A lot of this post applies to all Comp, not just hitting Legend.
I’ve been seeing a lot of posts lately asking for tips for hitting Legend in Comp so I figured I’d put together a brief guide for anyone that’s interested. I’m happy to see so many people interested in hitting Legend!
Intro
First off, a little bit about me. I never played D1, so I had a rough first few months of D2Y1 (and a rough first few weeks of Y2 with the new special weapon uptime and TTK), as it was my first time with the Destiny franchise. But even then I had a blast in Crucible and I always wanted to get better. I’m also an extremely competitive person, so that helped fuel my desire for improvement.
I play on Xbox and I just got my Unbroken title this season so I’ve been to Legend 3 times (S4, S6, S7). I’ve learned a ton along the way and hitting Legend each season has been easier and more enjoyable than the previous one for a variety of reasons that I’ll share in this post.
Improvement Mindset
While your end goal is to hit Legend, focusing on this binary goal isn’t a good idea. A better approach is to think of playing Comp with the main goal of improving both as a player and a team. With this more open and long-term mindset, you will improve rapidly as a player, win more often, and have a much more enjoyable experience as a result.
When you focus on something as binary as hitting a certain rank, every game or even decision within a game starts to feel tense and you put an immense amount of artificial pressure on yourself. This often builds over the course of a game. Even if it’s subconscious, it will affect your play. You’ll play too passively, too aggressively, and/or make bad decisions. Your brain will be too wrapped up playing out the consequences of failure to focus on what you should be doing to give yourself and your team the best chance of winning. It’s well documented, both in real sports and in e-sports, that tension leads to poor performance.
Instead, take every engagement and every game as an opportunity to learn something and to improve. You WILL start getting your ass kicked at some point, it’s just a matter of when. It might be at 3k and it might not be until 5k, but at some point it’ll happen. And when it does, the best thing to do is to record your gameplay and watch it back.
Gameplay Review
You can easily record your gameplay via Twitch by streaming and having it save past broadcasts. Then you can watch your gameplay there, or you can take it a step further and download your gameplay and run it through a free video editing program such as DaVinci Resolve or iMovie. The advantage of doing it this way is you can better control the playback and even view it frame-by-frame.
I’d recommend picking a game in which you performed poorly and watching it once all the way through, taking some mental notes. Then I’d watch it again, noting each death: why you died and what you could have done better to either kill your opponent first or escape safely. Even if you died to a Wardcliff or a solo super, write down something you could have done differently to prevent dying. Then categorize and tally them the best you can. The most frequent ones are what you should focus on getting better at. This can be during your next Comp session or QP/Rumble.
The reason reviewing your gameplay is so important is it’ll help speed up your rate of improvement and help you get past your current plateau a bit faster. Games in high comp tend to be very fast paced so it can be hard to think about or remember exactly what happened. Or what you think happened in the moment wasn’t what really happened and the gameplay review will show you this.
While it’s certainly possible to improve naturally and over time, recording and reviewing your gameplay will make you improve faster.
Playing the meta
A lot of people seem reluctant to use meta loadouts for whatever reason. I think most of it boils down to either wanting to be unique, or having a superiority complex by refusing to use certain good or easy to use weapons and strategies because they’re “cheap” or too easy. Throw all of this out the window.
There’s nothing cheap in Comp (other than DDoSing which is actual cheating and we won’t discuss it). There’s nothing that takes “no skill” to use. If it’s in the game then it’s fair game to be used as much and as effectively as possible. Everything has a counter. If you don’t believe this then you probably have a scrub mentality and it’s going to hold you back. There are some great posts about scrub mentality on this very sub.
Meta loadouts or weapons are usually the perfect cross section of both lethality and ease-of-use - USE THEM. This is the time and the place. Your opponents are trying to win at all costs and so should you.
I don’t want to go into too much detail or start a debate here, but in general these are the best options for high comp on Console (4k+). They’re ranked in terms of effectiveness, so it’s probably better to improve with something at the top of the list than use something at the bottom.
Primary Weapons: * Luna (NF if you have it already) * Adaptive or Aggressive pulse rifles * Ace/Thorn/TLW * Very well rolled Legendary HC * Jade Rabbit/Mida/Polaris Lance (large maps only)
Special Weapons: * Aggressive or Precision frame Shotgun (MindbendeToil/Imperial Decree/DRB/Retold Tale) * Erentil or Wizened Rebuke * Beloved/Twilight Oath/Supremacy/Revoker
Heavy Weapons: * Wardcliff * Truth * PotG * Any rocket launcher
Subclasses: * Hunter - middle void, middle or bottom arc * Titan - bottom void or bottom arc * Warlock - top arc or bottom solar
Exotics: * Stompees for Hunter * OEM or Antaeus Wards for Titan * Transversive Steps for Warlock
Mods: * 3+ super mods * 1-2 paragon mods for hunter if desired * 1-2 grenade mods for stormcaller or sentinel if desired * Otherwise 5 super mods
Stats: * As little resilience as possible, with a minimum of 1 (Titans’ minimum is 3 or 4, I think). The rest goes to mobility and/or recovery. I’d recommend 6+ mobility for most people, but some prefer a lower mobility and higher recovery.
I don’t really want to debate what else is meta or what’s the best or other specifics. But in my experience both playing and watching others play high comp, this is the meta.
For weapons, Luna and a shotgun is still the best and most versatile loadout for most people and most maps. Consider swapping to a pulse or scout instead of Luna (or a sniper instead of a shotgun) for larger maps. Especially for countdown, consider having at least one sniper on your team as being able to get a pick and play 4v3 puts your team at a huge advantage. Fusion rifles are also incredibly strong right now. You can basically treat one like your primary weapon and just use your actual primary to clean people up or shoot people past ~30m.
In the current meta supers are incredibly important. You want to use them frequently and make orbs for your teammates for them to pick up and vice versa. Try to use your super when the enemy team doesn’t have any supers ready or heavy ammo is about to be up. Coordinate with your teammates on who’s popping a super and when so you don’t double pop and your teammates can get heavy, map control, and shoot the enemies running away from you.
I’ve gotten some questions on why so little resilience so I’ll answer it here. You’re going to die to supers, heavy ammo, and special weapons a lot more than primaries. Your resilience won’t really matter against those things. Plus the primaries you do see in high comp (mostly NF) aren’t affected by resilience. And even with the other ones that you’ll occasionally see, resilience doesn’t really change the TTK; it only requires more headshots instead of body shots. At this level most players will be hitting their headshots anyways. Resilience was much more important in Y1 when there was a lot of primary weapon uptime.
The only time I’d recommend a higher resilience is if you’re on a Titan with OEM (to supplement recovery) and prefer low mobility. 7+ resilience will cause Erentil to take 5 bolts instead of 4 and might occasionally make a shotgun need to hit an extra pellet out of the spread to kill you (10 pellets of the 12, instead of 9 of 12 for example), among a couple of other minor advantages. I still wouldn’t really recommend it as I think you get more overall usage out of high recovery, but I’ve seen some people in high comp make it work.
Controlling heavy ammo wins games. Titans can use their barricade to pull heavy even while the other team is laning it. Prioritize getting the heavy and preventing your opponents from getting it. Once you get it, use it and don’t die with it. I’d recommend using it quickly, but if you’re running Wardcliff it’s not a bad idea to save a rocket for an opponent’s super.
Finding Teammates
One of the most important parts of hitting Legend is having quality teammates. And by quality I mean both skill and temperament. Unless you already have a large friends list filled with quality teammates, you’ll need to network to find some. You can do this both in-game and using LFG. You can solo queue with a decent amount of success until about 3.5k or so, then you’ll want to start forming a team. If you seem to gel with teammates when solo queuing, shoot them a message and ask if they’d like to team up.
As far as LFG goes, there are lots of LFG websites these days. I’ve personally had a lot of success with Xbox’s built-in LFG system. LFG can get a bad rep at times, which is understandable. Some people are toxic, tilt easily, blame teammates, complain all the time, aren’t very skilled, etc. You obviously want to avoid these types of people and instead find teammates that are skilled, chill, encouraging, and fun to play with. The best way to do this is to host the LFG group yourself by making the post and weeding people out. I’m not going to debate if/how important KD is in determining someone’s skill or what minimum you should ask for; use your own discretion here.
Once you get a team, just start playing. It might take a game or two for everyone to start to feel more comfortable with one another based on playstyles, tendencies, personalities, communication, etc. If things are going well after 4 or 5 games, keep playing. If they keep going well, add them to your friends list and ask them to do the same. If the games are not going well, you don’t seem to be playing together well as a team, and/or your personalities don’t seem to fit, consider politely excusing yourself and forming a new group. There’s absolutely nothing wrong with doing this. Sometimes the team is just not a good fit for whatever reason, it’s best for everyone to just move on with no hard feelings.
And by games going well I don’t necessarily mean winning. Are you teamshotting well? Baiting and switching effectively? Controlling the power ammo? Timing super usage? Moving together as a team? Playing complementary angles and watching each other’s backs? All of these are good signs of a team working well. One of the best indicators is the number of assists you’re getting as a team (these can be looked up on any 3rd-party website).
If your team is playing well together over a long session, like I said, add them and ask if they’ll do the same. Next time you get on, ask if they want to play before looking for a group via LFG. Sometimes they’ll even have friends that want to play as well which is great! Add anyone and everyone you play well with and seem to be on the same page with both in-game and personality wise. Rinse and repeat and you’ll have a solid list of friends to play Comp with. If you keep networking you can grow your friends list very quickly and effectively. You can also use Discord to schedule comp sessions.
The best way to attract good teammates is to be the best teammate you can. Be the teammate that you’d want on your team every single game and make things easy on your teammates. Hype them up for making good plays and encourage them if they make a bad one. Team shoot, make good callouts, don’t tilt, etc. Anything you’d look for in a good teammate, try to do that yourself and you’ll attract some great people to play with.
Always warm up before playing Comp and make sure your teammates have too. Rumble or QP is fine, but even a quick 10 minute private match rumble with your comp team can help warm up and build some camaraderie.
Closing Thoughts
Reaching Legend in Comp is seen by most as a daunting task rather than what it should be seen as: a huge accomplishment. Most people won’t even attempt it, for a variety of reasons ranging from pride to insufficient reward to the time and effort involved. High Comp is very challenging and honestly a much different game than QP or low Comp. It can be frustrating and stressful. But if you think of it as playing to improve and become the best player you can, instead of just hitting Legend, it’ll be very well worth it. Drastically improving as a player and, as a result, eventually hitting Legend is by far the best feeling in the entire game.
You might not even get there this season but that’s okay! But by having an improvement mindset and improving as a player, you’ll have a leg up next season - just stick with it and you’ll get there.
My final parting piece of advice is to just enjoy the journey. You’ll lose some close games and you’ll win some close games. You’ll get blown out by streamers or recovs and you’ll surprise yourself and beat some teams that are much better than you. Don’t sweat any of the losses, just enjoy playing the game. At the end of the day, this is a video game that we all play for fun.
One thing to keep in mind, especially once you get past 5k and are making that final push, you’re playing against some of the best players in the world and many of them play Comp for a living or it’s literally all they do. For most of us this is just one of many hobbies that we do for fun in our spare time, so don’t get too upset when you lose to these teams.
Thanks for reading - good luck and have fun! I’d be happy to answer any questions that anybody has.
Cheers!
submitted by Keetonicc to DestinyTheGame

[OC] Predicting the 2019-20 Coach of the Year

For those interested, this is part of a very long blog post here where I explain my entire thought process and methodology.
This post also contains a series of charts linked to here.

Introduction

Machine Learning models have been used to predict everything in basketball from the All Star Starters to James Harden’s next play. One model that has never been made is a successful Coach of the Year Predictor. The goal of this project is to create such a model.
Of course, creating such a model is challenging because, ultimately, the COY is awarded via voting, which inherently adds a human element. As we will discover in this post, accounting for these human elements (e.g. recency bias, the weighting of storylines, the climate around a team) is quite challenging. Having said this, I demonstrate how we can gain insight into what voters have valued in the past, allowing us to propose the most likely candidates quite accurately.

Methods

Data Aggregation

First, I created a database of all the coaches referred to in Basketball Reference's coaches index.
Coach statistics were acquired from the following template url:
f'https://www.basketball-reference.com/leagues/NBA_{season_end_year}_coaches.html'
Team statistics were acquired from the following template url:
f'https://www.basketball-reference.com/teams/{team_abbreviation}/{season_end_year}.html'
I leveraged the new basketball-reference-scraper Python module to simplify the process.
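As an illustration, the two template URLs above can be generated with small helpers. The function names here are my own hypothetical examples, not part of the basketball-reference-scraper API:

```python
# Hypothetical helpers (illustrative names, not from
# basketball-reference-scraper) that build the two template URLs
# described above.
def coach_url(season_end_year: int) -> str:
    # Coach statistics page for a given season's ending year.
    return (f"https://www.basketball-reference.com/leagues/"
            f"NBA_{season_end_year}_coaches.html")

def team_url(team_abbreviation: str, season_end_year: int) -> str:
    # Team statistics page for a given franchise and season.
    return (f"https://www.basketball-reference.com/teams/"
            f"{team_abbreviation}/{season_end_year}.html")

print(coach_url(2020))
print(team_url("MIL", 2020))
```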
After some data engineering that I describe completely in the post, I settled on the following features.
Non-numerical data: COACH, TEAM
Coach statistics: SEASONS WITH FRANCHISE, SEASONS OVERALL, CURRENT SEASON GAMES, CURRENT SEASON WINS, FRANCHISE SEASON GAMES, FRANCHISE SEASON WINS, CAREER SEASON GAMES, CAREER SEASON WINS, FRANCHISE PLAYOFF GAMES, FRANCHISE PLAYOFF WINS, CAREER PLAYOFF GAMES, CAREER PLAYOFF WINS, COY
Team data: SEASON, FG, FGA, FG%, 3P, 3PA, 3P%, 2P, 2PA, 2P%, FT, FTA, FT%, ORB, DRB, TRB, AST, STL, BLK, TOV, PF, PTS, OPP_G, OPP_FG, OPP_FGA, OPP_FG%, OPP_3P, OPP_3PA, OPP_3P%, OPP_2P, OPP_2PA, OPP_2P%, OPP_FT, OPP_FTA, OPP_FT%, OPP_ORB, OPP_DRB, OPP_TRB, OPP_AST, OPP_STL, OPP_BLK, OPP_TOV, OPP_PF, OPP_PTS, AGE, PW, PL, MOV, SOS, SRS, ORtg, DRtg, NRtg, PACE, FTr, TS%, eFG%, TOV%, ORB%, FT/FGA, OPP_eFG%, OPP_TOV%, DRB%, OPP_FT/FGA
For obtaining a full description of each statistic, please refer to Basketball Reference's glossary.

Data Exploration

First, I computed the correlation between the COY label and all the other features and sorted them. Here are some of the top statistics that correlate with the award along with their Pearson correlation coefficient.
| Statistic | Pearson coefficient |
|--|--|
| CURRENT SEASON WINS | 0.21764609944203592 |
| SRS | 0.20748396385759718 |
| MOV | 0.20740447792956693 |
| NRtg | 0.20613382194841318 |
| PW | 0.20282119218684597 |
| PL | -0.19850434198291064 |
| DRtg | -0.12967106743277185 |
| ORtg | 0.11896730313375109 |
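This ranking can be reproduced with plain NumPy; here `X`, `y`, and the feature names are placeholders standing in for the real dataset, not the actual project variables:

```python
import numpy as np

def top_correlations(X, y, names, n=8):
    """Pearson correlation of each feature column with the COY label,
    sorted by absolute value (the exploration step described above)."""
    corrs = [np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])]
    order = sorted(range(len(corrs)), key=lambda j: -abs(corrs[j]))
    return [(names[j], corrs[j]) for j in order[:n]]
```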
As expected, one of the most important features appears to be CURRENT SEASON WINS.
It is interesting that PW (Pythagorean wins) and PL (Pythagorean losses) correlate so strongly. This indicates that not only does performance matter, but the disparity between expected performance and reality matters significantly as well.
The weight put towards SRS, MOV, and NRtg also provides insight into how the COY is selected. Apparently, it matters not only whether a team wins, but also how they win. For example, the Bucks are winning games by an average margin of ~13 points this year, which would heavily favor them.
The high weight toward SRS (defined as a rating that takes into account average point differential and strength of schedule) indicates that it matters even more how a team performs against challenging opponents. For example, no one does (or should) care about the Bucks crushing the Warriors, but they should care if they beat the Lakers.
Let's explore the CURRENT SEASON WINS statistic a little more using a box plot.
Box Plot
It appears coaches need to win ~50+ games of an 82-game season in order to be in contention. The exception is Mike Dunleavy's minimum-win season: there were only 50 games that year because of the lockout, which explains the outlier.
Another interesting data point is the unfortunate coach who won the most games but did not win the award. This turned out to be Phil Jackson, who, one year after his 72-win season in 1995-96, appeared to underperform by winning "only" 69 games. This, once again, indicates that the COY award takes historical performance into account. Who won instead? Pat Riley, with 61 wins.
Here are some histograms of MOV and SRS, where the blue plots indicate COY's and the orange plots indicate non-COY's.
As expected, COY's tend to dominate their opponents, not just defeat them.

Oversampling

Before we begin, there is one key flaw in our dataset to address: the two classes are not balanced at all.
Counting the instances, we have 1686 non-COY's and 43 COY's (as expected). This disparity can lead to a poor model, so how did I fix it?

SMOTE Oversampling

SMOTE (Synthetic Minority Over-sampling Technique) is a method of oversampling to even out the distribution of the two classes. SMOTE takes a random sample from the minority class (COY=1 in our case) and computes its k nearest neighbors. It chooses one of the neighbors and computes the vector between the sample and that neighbor. Next, it multiplies this vector by a random number between 0 and 1 and adds the result to the original sample to obtain a new synthetic data point.
See more details here.
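A minimal NumPy sketch of that procedure (the core idea only, not the production implementation from the imbalanced-learn package):

```python
import numpy as np

def smote_sample(X_min, k=5, n_new=100, rng=None):
    """Generate synthetic minority-class points as described above:
    pick a random minority sample, pick one of its k nearest minority
    neighbors, then interpolate a random fraction of the way between them."""
    rng = np.random.default_rng(rng)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # Euclidean distance from sample i to every minority sample
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbors)
        gap = rng.random()                   # random number in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)
```

In the actual project one would oversample until the 43 COY rows balance the 1686 non-COY rows.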

Model Selection and Metrics

For this binary classification problem, we'll use 5 different models. Each model had its hyperparameters fine-tuned using Grid Search Cross-Validation to provide the best metrics. Here are all the models with a short description of each one:
  • Decision Tree Classifier - with Shannon's entropy as the criterion and a maximum depth of 37.
  • Random Forest Classifier - using the Gini index as the criterion, a maximum depth of 35, and a maximum of 5 features per split.
  • Logistic Classifier - using the simple ordinary least squares method.
  • Support Vector Machine - with a linear kernel and C=1000.
  • Neural Network - a simple 6-layer network of 80, 40, 20, 10, 5, and 1 nodes, respectively (the input width chosen to correspond with the number of features). I also used early stopping and 20% dropout on each layer to prevent overfitting.
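To make the tuning step concrete, here is a bare-bones sketch of grid search with k-fold cross-validation (an illustration of the idea, not sklearn's `GridSearchCV`; the `fit_score` callback stands in for fitting a model on the training folds and scoring it on the held-out fold):

```python
import itertools
import statistics

def grid_search_cv(fit_score, grid, n_samples, k=5):
    """Try every parameter combination in `grid`, score each one with
    k-fold cross-validation, and return (best_params, best_mean_score).
    `fit_score(params, train_idx, test_idx)` must return a score,
    higher being better."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    best = None
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        scores = []
        for test_idx in folds:
            train_idx = [i for i in range(n_samples) if i not in test_idx]
            scores.append(fit_score(params, train_idx, test_idx))
        mean = statistics.mean(scores)
        if best is None or mean > best[1]:
            best = (params, mean)
    return best
```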
The metrics that will be used to evaluate our models are listed below. Note that TP = True Positives (predicted COY and was COY), TN = True Negatives (predicted not COY and was not COY), FP = False Positives (predicted COY but was not COY), FN = False Negatives (predicted not COY but was COY).
  • Accuracy - % of correctly categorized instances ; Accuracy = (TP+TN)/(TP+TN+FP+FN)
  • Recall - Ability to categorize (+) class (COY) ; Recall = TP/(TP+FN)
  • Precision - Fraction of predicted positives that were correct ; Precision = TP/(TP+FP)
  • F1 - Balances Precision and Recall ; F1 = 2(Precision * Recall) / (Precision + Recall)
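These four definitions translate directly into code:

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, recall, precision, and F1 from raw confusion counts,
    exactly as defined above."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {'accuracy': accuracy, 'recall': recall,
            'precision': precision, 'f1': f1}
```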

Results

| Model | Accuracy | Recall | Precision | F1 |
|--|--|--|--|--|
| Decision Tree | 0.963 | 0.977 | 0.952 | 0.964 |
| Random Forest | 0.985 | 0.997 | 0.974 | 0.986 |
| Logistic | 0.920 | 0.980 | 0.870 | 0.922 |
| SVC | 0.959 | 0.991 | 0.932 | 0.960 |
| Neural Network | 0.898 | 1.0 | 0.833 | 0.909 |
The Random Forest outperforms every other model on all metrics. Moreover, it boasts an extremely high recall, which is our most important metric: when predicting the Coach of the Year, we care most about correctly identifying the positive class, which is exactly what a high recall indicates.

Confusion Matrices

Confusion Matrices are another way of visualizing our models' performance. They are nxn matrices where the rows represent the actual class and the columns represent the class predicted by the model.
In the case of a binary classification problem, we obtain a 2x2 matrix with the true positives (bottom right), true negatives (top left), false positive (top right), and false negatives (bottom left).
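In code, that 2x2 layout can be built directly from label/prediction pairs (a simple sketch, with 1 = COY and 0 = not COY):

```python
def confusion_matrix_2x2(y_true, y_pred):
    """2x2 confusion matrix in the layout described above:
    [[TN, FP],
     [FN, TP]]  where 1 = COY and 0 = not COY."""
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return [[tn, fp], [fn, tp]]
```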
Here are the confusion matrices for the Decision Tree, Random Forest, Logistic Classifier, SVC, and Neural Network.
Looking at the confusion matrices we can clearly see the disparity between the Random Forest Classifier and other classifiers. Evidently, the Random Forest Classifier is the best option.

Random Forest Evaluation

So what made the Random Forest so good? What features did it use that enabled it to make such accurate predictions?
I computed the feature importances of the Random Forest and plotted them in order here.
Here are some explicit numbers:
| Feature | % Contribution |
|--|--|
| CURRENT SEASON WINS | 6.569329857214043 |
| SRS | 6.368785568654217 |
| PW | 6.059094690243399 |
| NRtg | 5.5519116066060175 |
| MOV | 4.473122672559081 |
| PL | 3.643349558354282 |
| ... | ... |
See more in my blog post.
I found it, once again, interesting that SRS is such an important feature. It appears that the Random Forest picked up on the correlation we identified earlier.
However, we see that other statistics matter significantly too, like CURRENT SEASON WINS, NRtg, and MOV, as we predicted.
Something one wouldn't anticipate is the contribution of factors outside of this season, like the FRANCHISE and CAREER features. Along the same lines, one wouldn't expect PW or PL to matter much, yet the model indicates they are among the most important features.
Let’s also take a look at where the random forest failed. If you recall from the confusion matrix, there was one instance where a COY was classified as NOT COY.
The misclassified point is the 1976 COY: coach Bill Fitch of the 1975-76 Cleveland Cavaliers. He had a modest record of 49-33 during an overall down year where the top record was the 54-28 Lakers. Compared to the modern era, where 60-win records and obscene statistics are put up regularly, I would say this is not a terrible error on our model's part.
The model likely classified this as a NOT COY instance because the team's statistics aren't impressive in absolute terms, only relative to THAT year. This failure to account for how other teams performed in the same season may be the biggest flaw in our model.

Predicting the next Coach of the Year

Unfortunately, we do not have all the statistics for the current year, but we will obtain what we can and modify the data as we did earlier.
Note that all our data is PER GAME, so for all of these statistics we will just use the PER GAME numbers up to this point (1/21/20).
The only statistics we must estimate, then, are the CURRENT SEASON ones. We will assume CURRENT SEASON GAMES is 82 for all coaches and obtain CURRENT SEASON WINS from 538's ELO projections as of 1/21/20.
Once again, all other stats were acquired via the basketball_reference_scraper Python package.
| Team | Probability to win COY |
|--|--|
| MIL | 0.49 |
| TOR | 0.46 |
| LAC | 0.36 |
| BOS | 0.31 |
| HOU | 0.23 |
| LAL | 0.22 |
| DAL | 0.22 |
| MIA | 0.17 |
| DEN | 0.16 |
| IND | 0.13 |
| UTA | 0.12 |
| PHI | 0.12 |
| DET | 0.09 |
| NOP | 0.07 |
| WAS | 0.05 |
| SAS | 0.04 |
| ORL | 0.04 |
| CHI | 0.04 |
| BRK | 0.04 |
| POR | 0.03 |
| PHO | 0.03 |
| OKC | 0.03 |
| CHO | 0.03 |
| NYK | 0.02 |
| SAC | 0.01 |
| MIN | 0.01 |
| GSW | 0.01 |
| ATL | 0.01 |
| MEM | 0.0 |
| CLE | 0.0 |
This table shows the probability of each coach winning COY in the current season. Let's take a look at the top candidates in order:
1) Milwaukee Bucks & Mike Budenholzer (49%)
Mike Budenholzer was the COY in the 2018-19 season and is, objectively, the top candidate for COY this year as well. The Bucks are on a nearly-70-win pace, which would automatically elevate him to the top spot.
However, the model is purely objective and fails to incorporate human elements, such as the skepticism that paints the Bucks as merely a 'regular season team'. Voters will likely avoid Budenholzer until there is more playoff success.
Moreover, Budenholzer won last year and voters almost never vote for the same candidate twice in a row. In fact, a repeat performance has never occurred in the COY award.
Here we see the model's flaw: it does not sufficiently weight human elements such as recency bias against previous COY's and the demand for playoff success.
2) Toronto Raptors & Nick Nurse (46%)
The Raptors are truly an incredible story this year; no one expected them to be this good. Even the ELO ratings give them an expected 56 wins this season, tied for the 3rd best record in the league behind the Lakers and Bucks.
The disparity between what people expected of the Raptors and what has actually transpired (despite injuries to significant players such as Lowry and Siakam) indicates that Nurse would be a viable candidate for COY.
3) Los Angeles Clippers & Doc Rivers (36%)
Despite the model favoring Doc Rivers, I believe it is unlikely that he wins COY due to the current stories circulating around the Clippers.
Everyone came into the season expecting the Clippers to blow everyone out of the water in the playoffs, so no one is impressed by the Clippers exceeding expectations during the regular season, especially with superstars Kawhi Leonard and Paul George as the poster children of load management.
4) Boston Celtics & Brad Stevens (31%)
Brad Stevens is another likely candidate for the COY. Not only are the Celtics objectively impressive, but they also have the narrative on their side. After last year's disappointing performance, people questioned Stevens, but the team's newfound success without Kyrie Irving has pushed the blame onto Irving rather than Stevens. Moreover, significant strides by their young players Jaylen Brown and Jayson Tatum have vaulted them into contention for the Eastern Conference title.
5) Los Angeles Lakers & Frank Vogel (22%)
Being in tune with the current basketball landscape through podcasts and articles, I can tell that Frank Vogel's campaign for COY is quite strong. Over and over we hear praise from players like Anthony Davis and Danny Green (on a recent Lowe Post) about how happy the Lakers are.
With the gaudy record, spotlight and percolating positive energy around the Lakers, Vogel is a very viable pick for the COY.
6) Dallas Mavericks & Rick Carlisle (22%)
Tied with Vogel is Rick Carlisle of the Dallas Mavericks. The Mavericks, along with the Raptors, are perhaps the most unexpectedly successful team this season. Looking at their roster, no one stands out except Porzingis and Doncic, yet they still boast a predicted record of 50-32.
Once again, the disparity between expectations and reality puts Carlisle in high contention for the COY.

Conclusion

Overall, I'm quite pleased with the Random Forest model's metrics. The predictions it makes for the current 2019-20 season appear on point as well. The model captures the disparity between what people expected of teams and their on-court performance quite well. However, its flaw is that it does not weigh recent events properly, as we saw with coach Budenholzer.
Once again, predicting the COY is a challenging task and we cannot expect the model to be perfect. Yet, we can gain insight into what voters have valued in the past, allowing us to propose the most likely candidates quite accurately.
submitted by vagartha to nba

