
C++ Best Practices for a C Programmer

Hi all,
Long-time C programmer here, primarily working in the embedded industry (particularly involving safety-critical code). I've been a lurker on this sub for a while but I'm hoping to ask some questions regarding best practices. I've been trying to start using C++ in a lot of my work - particularly taking advantage of some of the code-reuse and power of C++ (particularly constexpr, some loose template programming, stronger type checking, RAII, etc.).
I would consider myself maybe an 8/10 C programmer but I would conservatively rate myself as 3/10 in C++ (with 1/10 meaning the absolute minimum ability to write, google syntax errata, diagnose, and debug a program). Perhaps I should preface the post by saying that I am more than aware that C is by no means a subset of C++ and there are many language constructs permitted in one that are not in the other.
In any case, I was hoping to get a few answers regarding best practices for C++. Keep in mind that the typical target device I work with does not have a heap of any sort, so a lot of the features that constitute "modern" C++ (post-initialization use of dynamic memory, STL meta-programming, hash-maps, lambdas (as I currently understand them)) are a big no-no in terms of passing safety review.

When do I overload operators inside a class as opposed to outside?

... And what are the arguments for/against each paradigm? See below:
    /* Overload example 1 (overloaded inside class) */
    class myclass
    {
    private:
        unsigned int a;
        unsigned int b;

    public:
        myclass(void);
        unsigned int get_a(void) const;
        bool operator==(const myclass &rhs);
    };

    bool myclass::operator==(const myclass &rhs)
    {
        if (this == &rhs) {
            return true;
        } else {
            if (this->a == rhs.a && this->b == rhs.b) {
                return true;
            }
        }
        return false;
    }
As opposed to this:
    /* Overload example 2 (overloaded outside of class) */
    class CD
    {
    private:
        unsigned int c;
        unsigned int d;

    public:
        CD(unsigned int _c, unsigned int _d) : c(_c), d(_d) {} /* CTOR */
        unsigned int get_c(void) const; /* trivial getter */
        unsigned int get_d(void) const; /* trivial getter */
    };

    /* In this implementation, if I don't make the getters (get_c, get_d) const,
     * it won't compile despite their access specifiers being public.
     *
     * It seems like the const keyword in C++ really should be interpreted as
     * "read-only AND no side effects" rather than just read-only as in C.
     * But my current understanding may just be flawed...
     *
     * My confusion is as follows: The function args are constant references,
     * so why do I have to promise that the member functions have no side effects on
     * the private object members? Is this something specific to the == operator? */
    bool operator==(const CD &lhs, const CD &rhs)
    {
        if (&lhs == &rhs)
            return true;
        else if ((lhs.get_c() == rhs.get_c()) && (lhs.get_d() == rhs.get_d()))
            return true;
        return false;
    }
When should I use the example 1 style over the example 2 style? What are the pros and cons of 1 vs 2?
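One difference I've read about (illustrated in the sketch below; the type and names are invented for the example, and pre-C++20 overload resolution rules are assumed): a non-member overload lets implicit conversions apply to both operands, so the comparison stays symmetric.

    /* Sketch only: invented type, assuming pre-C++20 rules. */
    class metres
    {
    public:
        metres(unsigned int v) : value(v) {} /* implicit conversion from integers */
        unsigned int get(void) const { return value; }

    private:
        unsigned int value;
    };

    /* Non-member: both (m == 3u) and (3u == m) compile, because the
     * integer converts to metres on either side. */
    inline bool operator==(const metres &lhs, const metres &rhs)
    {
        return lhs.get() == rhs.get();
    }

    /* If operator== were a member instead, (m == 3u) would still compile
     * (3u converts to the parameter), but (3u == m) would not, because no
     * conversion is ever applied to the object a member function is called on.
     * (C++20 relaxes this for == by also considering reversed operands.) */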

What's the deal with const member functions?

This is more of a subtle confusion, but it seems like in C++ the const keyword means different things based on the context in which it is used. I'm trying to develop a relatively nuanced understanding of what's happening under the hood and I most certainly have misunderstood many language features, especially because C++ has likely changed greatly in the last ~6-8 years.
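To spell out my current mental model, here is a small sketch (all names invented for the example) of the reading that seems consistent: a const member function promises not to modify the object it is called on, and that promise is exactly what lets it be called through a const reference.

    /* Sketch only, invented names. */
    class counter
    {
    public:
        unsigned int value(void) const { return count; } /* const member function */
        void increment(void) { ++count; }                /* non-const member function */

    private:
        unsigned int count = 0;
    };

    void demo(const counter &c)
    {
        unsigned int v = c.value(); /* OK: value() promises not to modify c */
        /* c.increment();              error: increment() makes no such promise */
        (void)v;
    }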

When should I use enum classes versus plain old enum?

To be honest I'm not entirely certain I fully understand the implications of using enum versus enum class in C++.
This is made more confusing by the fact that there are subtle differences between the way C and C++ treat or permit various language constructs (const, enum, typedef, struct, void*, pointer aliasing, type punning, tentative declarations).
In C, enums decay to integer values at compile time. But in C++, the way I currently understand it, enums are their own type. Thus, in C, the following code would be valid, but a C++ compiler would generate a warning (or an error, I haven't actually tested it):
    /* Example 3: (enums: valid in C, invalid in C++) */
    enum COLOR
    {
        RED,
        BLUE,
        GREY
    };

    enum PET
    {
        CAT,
        DOG,
        FROG
    };

    /* This is compatible with a C-style enum conception but not C++ */
    enum SHAPE
    {
        BALL = RED, /* In C, these work because int = int is valid */
        CUBE = DOG,
    };
If my understanding is indeed the case, do enums have an implicit namespace (language construct, not the C++ keyword) as in C? As an add-on to that, in C++, you can also declare enums as a sort of inherited type (below). What am I supposed to make of this? Should I just be using it to reduce code size when possible (similar to gcc option -fuse-packed-enums)? Since most processors are word based, would it be more performant to use the processor's word type than the syntax specified above?
    /* Example 4: (Purely C++ style enums, use of enum class / enum struct) */
    /* C++ permits forward enum declaration with the type specified */
    enum FRUIT : int;
    enum VEGGIE : short;

    enum FRUIT /* As I understand it, these are ints */
    {
        APPLE,
        ORANGE,
    };

    enum VEGGIE /* As I understand it, these are shorts */
    {
        CARROT,
        TURNIP,
    };
Complicating things even further, I've also seen the following syntax:
    /* What the heck is an enum class anyway? When should I use them? */
    enum class THING
    {
        THING1,
        THING2,
        THING3
    };

    /* And if classes and structs are interchangeable (minus assumptions
     * about default access specifiers), what does that mean for
     * the following definition? */
    enum struct FOO /* Is this even valid syntax? */
    {
        FOO1,
        FOO2,
        FOO3
    };
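For reference, here is a minimal sketch of the scoping and conversion behaviour as I currently understand it (identifiers invented for the example). And as far as I can tell, enum struct is valid syntax and means exactly the same thing as enum class.

    /* Sketch only, invented identifiers. */
    enum COLOR_OLD { RED_OLD, BLUE_OLD }; /* unscoped: RED_OLD spills into the enclosing scope */
    enum class COLOR_NEW { RED, BLUE };   /* scoped: must be written COLOR_NEW::RED */

    void enum_demo(void)
    {
        int a = RED_OLD;                          /* OK: unscoped enums convert to int implicitly */
        /* int b = COLOR_NEW::RED;                   error: no implicit conversion to int */
        int c = static_cast<int>(COLOR_NEW::RED); /* an explicit conversion is required */
        (void)a;
        (void)c;
    }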
Given that enumerated types greatly improve code readability, I've been trying to wrap my head around all this. When should I be using the various language constructs? Are there any pitfalls in a given method?

When to use POD structs (à la C) versus a class implementation?

If I had to take a stab at answering this question, my intuition would be to use POD structs for passing aggregate types (as in function arguments) and to use classes for interface abstractions / object abstractions, as in the example below:
    struct aggregate
    {
        unsigned int related_stuff1;
        unsigned int related_stuff2;
        char name_of_the_related_stuff[20];
    };

    class abstraction
    {
    private:
        unsigned int private_member1;
        unsigned int private_member2;

    protected:
        unsigned int stuff_for_child_classes;

    public:
        /* big 3 */
        abstraction(void);
        abstraction(const abstraction &other);
        ~abstraction(void);

        /* COPY semantic (I have a better grasp on this abstraction than MOVE) */
        abstraction &operator=(const abstraction &rhs);

        /* MOVE semantic (subtle semantics of which I don't fully grasp yet) */
        abstraction &operator=(abstraction &&rhs);

        /*
         * I've seen implementations of this that use a copy + swap design pattern
         * but that relies on std::move and I realllllly don't get what is
         * happening under the hood in std::move
         */
        abstraction &operator=(abstraction rhs);

        void do_some_stuff(void); /* member function */
    };
Is there an accepted best practice for this, or is it entirely preference? Are there arguments for only using classes? And what about vtables (in cases such as device register overlays, where I need byte-wise alignment and have to guarantee the precise placement of members)?
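To make that last concern concrete, here is a small sketch (invented names) of why I worry about virtual functions in overlay types: adding a virtual member typically introduces a hidden vtable pointer, and the type stops being standard-layout, so its field placement can no longer be relied on for memory-mapped registers.

    /* Sketch only, invented names. */
    #include <type_traits>

    struct PlainRegs /* plausible as a register overlay */
    {
        volatile unsigned int ctrl;
        volatile unsigned int status;
    };

    struct VirtualRegs /* NOT plausible as a register overlay */
    {
        volatile unsigned int ctrl;
        volatile unsigned int status;
        virtual void poll(void) {} /* introduces a vptr */
    };

    static_assert(std::is_standard_layout<PlainRegs>::value, "layout is well defined");
    static_assert(!std::is_standard_layout<VirtualRegs>::value, "layout is no longer guaranteed");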

Is there a best practice for integrating C code?

Typically (and up to this point), I've just done the following:
    /* Example 5: Linking a C library */
    /* Disable name-mangling, and then give the C++ linker /
     * toolchain the compiled binaries */
    #ifdef __cplusplus
    extern "C" {
    #endif /* C linkage */

    #include "device_driver_header_or_a_c_library.h"

    #ifdef __cplusplus
    }
    #endif /* C linkage */

    /* C++ code goes here */
As far as I know, this is the only way to prevent the C++ compiler from generating different object symbols than those in the C header file. Again, this may just be ignorance of C++ standards on my part.
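A variant I've also seen (just a sketch; the header name and functions below are placeholders) puts the linkage guard inside the C header itself, so that both C and C++ translation units can include it unchanged:

    /* device_driver.h -- placeholder names, sketch only */
    #ifndef DEVICE_DRIVER_H
    #define DEVICE_DRIVER_H

    #ifdef __cplusplus
    extern "C" {
    #endif

    int driver_init(void);             /* gets C linkage in both languages */
    int driver_read(unsigned int reg);

    #ifdef __cplusplus
    }
    #endif

    #endif /* DEVICE_DRIVER_H */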

What is the proper way to selectively incorporate RTTI without code size bloat?

Is there even a way? I'm relatively fluent in CMake, but I guess the underlying question is whether binaries that incorporate RTTI are compatible with those that don't (and what pitfalls may ensue when mixing the two).

What about compile time string formatting?

One of my biggest gripes about C (particularly regarding string manipulation) is that variadic arguments frequently get handled at runtime, especially on embedded targets. This makes string manipulation via the C standard library (printf-style format strings) uncomputable at compile time in C.
This is sadly the case even when the ranges and values of parameters and formatting outputs are entirely known beforehand. C++ template programming seems to be a big thing in "modern" C++ and I've seen a few projects on this sub that use the Turing-completeness of the template system to do some crazy things at compile time. Is there a way to bypass this ABI limitation using C++ features like constexpr, templates, and lambdas? My (somewhat pessimistic) suspicion is that since the generated assembly must be ABI-compliant this isn't possible. Is there a way around this? What about the std::format stuff I've been seeing on this sub periodically?
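To illustrate the kind of thing I'm hoping is possible, here is a rough sketch of my own (not necessarily the idiomatic approach, and it assumes C++14 or later): an unsigned integer formatted into a character buffer entirely at compile time with constexpr, with no heap and no runtime printf machinery.

    /* Sketch only, assuming C++14 or later. */
    struct DecimalString
    {
        char text[12]; /* enough for 32-bit values plus a terminating NUL */
    };

    constexpr DecimalString to_decimal(unsigned value)
    {
        DecimalString out{};
        char tmp[12] = {};
        int n = 0;
        do { /* emit digits least-significant first */
            tmp[n++] = static_cast<char>('0' + (value % 10u));
            value /= 10u;
        } while (value != 0u);
        for (int i = 0; i < n; ++i) { /* reverse into the result buffer */
            out.text[i] = tmp[n - 1 - i];
        }
        out.text[n] = '\0';
        return out;
    }

    static_assert(to_decimal(42u).text[0] == '4', "evaluated at compile time");
    static_assert(to_decimal(42u).text[1] == '2', "evaluated at compile time");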

Is there a standard practice for namespaces and when to start incorporating them?

Is it from the start? Is it when the boundaries of a module become clearly defined? Or is it just personal preference / based on project scale and modularity?
If I had to make a guess, it would be at the point that you get a "build group" for a project (a group of source files that should be compiled together), as that would loosely define the boundaries of the abstractions / APIs you may provide to other parts of a project.
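To make that guess concrete, here is a tiny sketch (all names invented) of what I have in mind: one namespace per build group / module, so the namespace boundary matches the API boundary.

    /* Sketch only, invented names. */
    namespace motor_control
    {
        void set_speed(unsigned int rpm);
        unsigned int get_speed(void);
    }

    namespace telemetry
    {
        void log_event(unsigned int id);
    }

    /* Elsewhere in the project: motor_control::set_speed(100u); */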
--EDIT-- markdown formatting
submitted by aWildElectron to cpp

29 [TM4TM] American expat transmasc looking for (primarily) trans men / trans masc friends online, maybe more

Hey. I'm a 29 year old transmasc guy just beginning to pass socially abroad. Being the egg I am I look like a teenage boy though. Where I'm living now there's not much trans awareness so I have a limited capacity to make any trans friends, and I feel my dating pool is significantly reduced to non existent as a result. I've been single for the past 4 years and I'm wanting to connect. I'm the only trans person I know irl.
That being said I'd like to make some trans friends. All trans identities are welcome, though I'm particularly interested in trans men or trans masc identities to bond with over our unique situations. Hormones are available in the country I'm living in but trans rights are not very good: trans people are forced to be sterilized through invasive surgery if they choose to transition, and they cannot control their hormone intake (guys receive T shots from medical staff at a dose the medical staff decide). Non-binary identities are virtually non-existent here so I do not intend to transition here. Instead I'll cope and compartmentalize and ignore my dysphoria while trying to enjoy living out my childhood dream of living abroad. Minus not being able to transition, quality of life here is pretty good.
Sexually I'm attracted to cis men, trans men, and trans masc people primarily but I experience intense sexual attraction for women as well. Lately I've been primarily attracted to trans men.
Whenever it seems safer and more politically sound to return to America I will go back, at least that's the plan for now. Beyond the trans stuff, a little about me.
Teaching abroad and working with a variety of students has awakened a serious interest in me in psychology and sociology. I'm interested in eventually returning to the US (either Cali or WA) to pursue a graduate degree in psychology counseling. I aim to help anyone and everyone I can from children to elderly, LGBTQ people and people of color.
I'm all kinds of nerd.
I have a wide variety of interests, primarily art/crafting/music (drawing, painting, making silicone charms, Hama bead 8-bit art; I'm interested in photography and videography, taking up instruments such as the ukulele, kalimba, piano, and lately thinking about the saxophone), I'm a big kpop and kdrama fan, I occasionally watch anime, I recently started DnD for the first time, and I play video games occasionally (stuff like Pokemon and ACNH mostly, but some of my favorite games are Zelda Breath of the Wild, the Fallout series, Elder Scrolls, the Final Fantasy series, and my ultimate favorite, the Bioshock series.)
I study Japanese and Korean. I'm also mildly interested in Italian and Chinese.
I'd like to travel around Europe someday, especially Italy.
I heavily suspect I have inattentive ADHD but treatment options seem harder to come by here due to the heavy stigma of mental illness.
Thank you for reading my wall of text.
submitted by someinspiringquote to t4t

Which AWS Lambda programming language should you use?

Original article here: https://dashbird.io/blog/most-effictient-lambda-language/

To me personally, when I think of programming languages I think of JavaScript, and while 67% of developers out there might think the same (at first), that does not imply it's the most efficient language to use with AWS Lambda.
This article will be a two-parter: in this one I'm going to explore the pros and cons of the most popular programming languages for Lambda, and the second one will contain benchmarks of said languages on Lambda. Hopefully, this will end up shedding some light on this particular subject.
So without further ado, here we go, with great bias and no benchmarks to back my claims (but do check back on the blog soon and we'll have those benchmarks ready for you).

1. Java

Java has been in service for decades and is, to this day, a reliable option when choosing the backbone of your stack. With AWS Lambda it is no different, as Java makes a strong candidate for your functions.

Java applications in AWS Lambda have the following merits.

Reliable and well-tested libraries. These libraries make life easy for you through enhanced testability and maintainability of AWS Lambda tasks.
Predictable performance. While Java has slower spin-up times, you can easily predict the memory needs of your functions, and to counteract those dreaded cold starts you can simply increase your memory allocation.
Tooling Support. Java has a wide range of tooling support which includes Eclipse, IntelliJ IDEA, Maven, and Gradle among others.
If you’re wondering how Java remains an efficient AWS Lambda language, here is the answer: Java has unique characteristics like multi-threaded concurrency, platform independence, security, and object orientation.

2. Node.js

I’m definitely biased, but Node.js is probably the best one on this list. I know it has its minuses, but the overwhelming support that Node has had in the past years has its merits.

Why Node.js?

Modules. As of now, there are 1735 plugins on npm tagged “AWS-lambda” which help developers with their applications in a lot of different ways, from running Lambda locally to keeping vital functions warm to avoid cold starts.
Spin-up times. Node.js has better spin-up times than C# or Java, which makes it a better option for client-facing applications that risk suffering from uneven traffic distribution.
Community. I’d be remiss not to mention this: one of the major draws of Node is its community support, on which you can always rely to find a solution to your problem.

3. Python

Python applications are everywhere: GUI-based desktop applications, web frameworks, operating systems, and enterprise applications. In the past few years, we’ve seen a lot of developers adopting Python, and it seems like this trend is not stopping.

The benefits of Python in AWS Lambda environments.

Unbelievable spin-up times. Python is without a doubt the absolute winner when it comes to spinning up containers. It’s about 100 times faster than Java or C#.
Third party modules. Like npm, Python has a wide variety of modules available, which helps ease interaction with other languages and platforms.
Easy to learn, with community support. If you are a beginner, programming languages can scare you. However, Python is very readable and has a supportive community to help in its application. Pythonistas have uploaded more than 145,000 support packages to help users.
Simplicity. With Python you can avoid overcomplicated architecture.

4. Go

The introduction of the Go language was a significant step forward for AWS Lambda. Although Go has its share of problems, it's well suited to a serverless environment, and its merits are not to be ignored.

So, what is so outstanding about Go?

Go 1.x compatibility promise. Unlike many other languages such as Java and C++, Go makes a strong backwards-compatibility guarantee: programs written against Go 1.x keep compiling correctly without constant alterations.
Go uses static binaries. This removes linking concerns at deployment time. On top of that, AWS Lambda programs written in Go enjoy forward compatibility.
Go offers stability. Its tooling, language design, and ecosystem make the language shine.
Goroutines. Goroutines are a way of writing code that can run concurrently, whilst letting Go decide how many threads should actually be running at once, which works amazingly well in AWS Lambda.

5. .NET Core

.NET Core's popularity stands out, and it's a welcome addition for people already relying on AWS to run their .NET applications.
NuGet support. Just like all the other languages supported on Lambda, .NET Core gets module support via NuGet, which makes developers' lives a lot easier.
Consistent performance. .NET Core has more consistent performance results than Node.js or Python as a result of its less dynamic nature.
Faster execution. Compared to Go, .NET Core has a faster execution time, which is not something to be ignored.

6. Ruby

If you’re an AWS customer, then Ruby is probably familiar to you. The Ruby programming language stands out as it reduces complexity for AWS Lambda users.

So, what are the benefits of Ruby in AWS lambda?

Third party module support. The language has unique modules that allow adding new elements to the class hierarchy at runtime.
Strong and supportive community. This makes Ruby simple to pick up and use.
Clean Code. Its clean code improves AWS Lambda performance.
Ruby is a relatively new addition to the AWS Lambda roster, but there is a lot of interest around it already. I look forward to seeing how far we can push Ruby using AWS Lambda.

Conclusion

At first glance, performance in a controlled, similar environment running the same kind of functions isn’t all that different, and until you get these into production you won’t be able to reach a definitive conclusion. Stay tuned for the follow-up to this article, which will contain an updated benchmark of all the languages supported by AWS Lambda in 2019.
submitted by Dashbird to serverless

Differences between LISP 1.5 and Common Lisp, Part 1:

[Edit: I didn't mean to put a colon in the title.]
In this post we'll be looking at some of the things that make LISP 1.5 and Common Lisp different. There isn't too much surviving LISP 1.5 code, but some of the code that is still around is interesting and worthy of study.
Here are some conventions used in this post of which you might take notice:
Sources are linked sometimes below, but here is a list of links that were helpful while writing this:
The differences between LISP 1.5 and Common Lisp can be classified into the following groups:
  1. Superficial differences—matters of syntax
  2. Conventional differences—matters of code style and form
  3. Fundamental differences—matters of semantics
  4. Library differences—matters of available functions
This post will go through the first three of these groups in that order. A future post will discuss library differences, except for some functions dealing with character-based input and output, since they are a little world unto their own.
[Originally the library differences were part of this post, but it exceeded the length limit on posts (40000 characters)].

Superficial differences.

LISP 1.5 was used initially on computers that had very limited character sets. The machine on which it ran at MIT, the IBM 7090, used a six-bit, binary-coded decimal encoding for characters, which could theoretically represent up to sixty-four characters. In practice, only forty-six were widely used. The repertoire of this character set consisted of the twenty-six uppercase letters, the nine digits, the blank character ' ', and the ten special characters '-', '/', '=', '.', '$', ',', '(', ')', '*', and '+'. You might note the absence of the apostrophe/single quote—there was no shorthand for the quote operator in LISP 1.5 because no sensical character was available.
When the LISP 1.5 system read input from cards, it treated the end of a card not like a blank character (as is done in C, TeX, etc.), but as nothing. Therefore the first character of a symbol's name could be the last character of a card, the remaining characters appearing at the beginning of the next card. Lisp's syntax allowed for the omission of almost all whitespace besides that which was used as delimiters to separate tokens.
List syntax. Lists were contained within parentheses, as is the case in Common Lisp. From the beginning Lisp had the consing dot, which was written as a period in LISP 1.5; the interaction between the period when used as the consing dot and the period when used as the decimal point will be described shortly.
In LISP 1.5, the comma was equivalent to a blank character; both could be used to delimit items within a list. The LISP I Programmer's Manual, p. 24, tells us that
The commas in writing S-expressions may be omitted. This is an accident.
Number syntax. Numbers took one of three forms: fixed-point integers, floating-point numbers, and octal numbers. (Of course octal numbers were just an alternative notation for the fixed-point integers.)
Fixed-point integers were written simply as the decimal representation of the integers, with an optional sign. It isn't explicitly mentioned whether a plus sign is allowed in this case or if only a minus sign is, but floating-point syntax does allow an initial plus sign, so it makes sense that the fixed-point number syntax would as well.
Floating-point numbers had the syntax described by the following context-free grammar, where a term in square brackets indicates that the term is optional:
    float:
        [sign] integer '.' [integer] exponent
        [sign] integer '.' integer [exponent]
    exponent:
        'E' [sign] digit [digit]
    integer:
        digit
        integer digit
    digit: one of
        '0' '1' '2' '3' '4' '5' '6' '7' '8' '9'
    sign: one of
        '+' '-'
This grammar generates things like 100.3 and 1.E5 but not things like .01 or 14E2 or 100.. The manual seems to imply that if you wrote, say, (100. 200), the period would be treated as a consing dot [the result being (cons 100 200)].
Floating-point numbers are limited in absolute value to the interval (2^-128, 2^128), and eight digits are significant.
Octal numbers are defined by the following grammar:
    octal:
        [sign] octal-digits 'Q' [integer]
    octal-digits:
        octal-digit [octal-digit] [octal-digit] [octal-digit] [octal-digit] [octal-digit]
        [octal-digit] [octal-digit] [octal-digit] [octal-digit] [octal-digit] [octal-digit]
    octal-digit: one of
        '0' '1' '2' '3' '4' '5' '6' '7'
The optional integer following 'Q' is a scale factor, which is a decimal integer representing an exponent with a base of 8. Positive octal numbers behave as one would expect: The value is shifted to the left 3×s bits, where s is the scale factor (for example, 7Q2 denotes 7 × 8^2 = 448). Octal was useful on the IBM 7090, since it used thirty-six-bit words; twelve octal digits (which is the maximum allowed in an octal number in LISP 1.5) thus represent a single word in a convenient way that is more compact than binary (but still easily convertible to and from binary). If the number has a negative sign, then the thirty-sixth bit is logically ored with 1.
The syntax of Common Lisp's numbers is a superset of that of LISP 1.5. The only major difference is in the notation of octal numbers; Common Lisp uses the sharpsign reader macro for that purpose. Because of the somewhat odd semantics of the minus sign in octal numbers in LISP 1.5, it is not necessarily trivial to convert a LISP 1.5 octal number into a Common Lisp expression resulting in the same value.
Symbol syntax. Symbol names can be up to thirty characters in length. While the actual name of a symbol was kept on its property list under the pname indicator and could be any sequence of thirty characters, the syntax accepted by the read program for symbols was limited in a few ways. First, a name must not begin with a digit or with either of the characters '+' or '-', and its first two characters cannot both be '$'. Otherwise, all the alphanumeric characters are allowed, along with the special characters '+', '-', '=', '*', '/', and '$'. The fact that a symbol can't begin with a sign character or a digit has to do with the number syntax; the fact that a symbol can't begin with '$$' has to do with the mechanism by which the LISP 1.5 reader allowed you to write characters that are usually not allowed in symbols, which is described next.
Two dollar signs initiated the reading of what we today might call an "escape sequence". An escape sequence had the form "$$xSx", where x was any character and S was a sequence of up to thirty characters not including x. For example, $$x()x would get the symbol whose name is '()' and would print as '()'. Thus it is similar in purpose to Common Lisp's | syntax. There is a significant difference: It could not be embedded within a symbol, unlike Common Lisp's |. In this respect it is closer to Maclisp's | reader macro (which created a single token) than it is to Common Lisp's multiple escape character. In LISP 1.5, "A$$X()X$" would be read as (1) the symbol A$$X, (2) the empty list, (3) the symbol X.
The following code sets up a $ reader macro so that symbols using the $$ notation will be read in properly, while leaving things like $eof$ alone.
    (defun dollar-sign-reader (stream character)
      (declare (ignore character))
      (let ((next (read-char stream t nil t)))
        (cond ((char= next #\$)
               (let ((terminator (read-char stream t nil t)))
                 (values (intern (with-output-to-string (name)
                                   (loop for c := (read-char stream t nil t)
                                         until (char= c terminator)
                                         do (write-char c name)))))))
              (t
               (unread-char next stream)
               (with-standard-io-syntax
                 (read (make-concatenated-stream
                        (make-string-input-stream "$")
                        stream)
                       t nil t))))))

    (set-macro-character #\$ #'dollar-sign-reader t)

Conventional differences.

LISP 1.5 is an old programming language. Generally, compared to its contemporaries (such as FORTRANs I–IV), it holds up well to modern standards, but sometimes its age does show. And there were some aspects of LISP 1.5 that might be surprising to programmers familiar only with Common Lisp or a Scheme.
M-expressions. John McCarthy's original concept of Lisp was a language with a syntax like this (from the LISP 1.5 Programmer's Manual, p. 11):
    equal[x;y]=[atom[x]→[atom[y]→eq[x;y]; T→F];
                equal[car[x];car[y]]→equal[cdr[x];cdr[y]];
                T→F]
There are several things to note. First is the entirely different phrase structure. It is an infix language looking much closer to mathematics than the Lisp we know and love. Square brackets are used instead of parentheses, and semicolons are used instead of commas (or blanks). When square brackets do not enclose function arguments (or parameters when to the left of the equals sign), they set up a conditional expression; the arrows separate predicate expressions and consequent expressions.
If that was Lisp, then where do s-expressions come in? Answer: quoting. In the m-expression notation, uppercase strings of characters represent quoted symbols, and parenthesized lists represent quoted lists. Here is an example from page 13 of the manual:
λ[[x;y];cons[car[x];y]][(A B);(C D)] 
As an s-expressions, this would be
((lambda (x y) (cons (car x) y)) '(A B) '(C D)) 
The majority of the code in the manual is presented in m-expression form.
So why did s-expressions stick? There are a number of reasons. The earliest Lisp interpreter was a translation of the program for eval in McCarthy's paper introducing Lisp, which interpreted quoted data; therefore it read code in the form of s-expressions. S-expressions are much easier for a computer to parse than m-expressions, and also more consistent. (Also, the character set mentioned above includes neither square brackets nor a semicolon, let alone a lambda character.) But in publications m-expressions were seen frequently; perhaps the syntax was seen as a kind of "Lisp pseudocode".
Comments. LISP 1.5 had no built-in commenting mechanism. It's easy enough to define a comment operator in the language, but it seemed like nobody felt a need for them.
Interestingly, FORTRAN I had comments. Assembly languages of the time sort of had comments, in that they had a portion of each line/card that was ignored in which you could put any text. FORTRAN was ahead of its time.
(Historical note: The semicolon comment used in Common Lisp comes from Maclisp. Maclisp likely got it from PDP-10 assembly language, which let a semicolon and/or a line break terminate a statement; thus anything following a semicolon is ignored. The convention of octal numbers by default, decimal numbers being indicated by a trailing decimal point, of Maclisp too comes from the assembly language.)
Code formatting. The code in the manual that isn't written using m-expression syntax is generally lacking in meaningful indentation and spacing. Here is an example (p. 49):
(TH1 (LAMBDA (A1 A2 A C) (COND ((NULL A) (TH2 A1 A2 NIL NIL C)) (T (OR (MEMBER (CAR A) C) (COND ((ATOM (CAR A)) (TH1 (COND ((MEMBER (CAR A) A1) A1) (T (CONS (CAR A) A1))) A2 (CDR A) C)) (T (TH1 A1 (COND ((MEMBER (CAR A) A2) A2) (T (CONS (CAR A) A2))) (CDR A) C)))))))) 
Nowadays we might indent it like so:
    (TH1 (LAMBDA (A1 A2 A C)
           (COND ((NULL A) (TH2 A1 A2 NIL NIL C))
                 (T (OR (MEMBER (CAR A) C)
                        (COND ((ATOM (CAR A))
                               (TH1 (COND ((MEMBER (CAR A) A1) A1)
                                          (T (CONS (CAR A) A1)))
                                    A2
                                    (CDR A)
                                    C))
                              (T (TH1 A1
                                      (COND ((MEMBER (CAR A) A2) A2)
                                            (T (CONS (CAR A) A2)))
                                      (CDR A)
                                      C))))))))
Part of the lack of formatting stems probably from the primarily punched-card-based programming world of the time; you would see the indented structure only by printing a listing of your code, so there is no need to format the punched cards carefully. LISP 1.5 allowed a very free format, especially when compared to FORTRAN; the consequence is that early LISP 1.5 programs are very difficult to read because of the lack of spacing, while old FORTRAN programs are limited at least to one statement per line.
The close relationship of Lisp and pretty-printing originates in programs developed to produce nicely formatted listings of Lisp code.
Lisp code from the mid-sixties used some peculiar formatting conventions that seem odd today. Here is a quote from Steele and Gabriel's Evolution of Lisp:
This intermediate example is derived from a 1966 coding style:
    DEFINE((
    (MEMBER (LAMBDA (A X) (COND ((NULL X) F)
                                ((EQ A (CAR X) ) T)
                                (T (MEMBER A (CDR X))) )))
    ))
The design of this style appears to take the name of the function, the arguments, and the very beginning of the COND as an idiom, and hence they are on the same line together. The branches of the COND clause line up, which shows the structure of the cases considered.
This kind of indentation is somewhat reminiscent of the formatting of Algol programs in publications.
Programming style. Old LISP 1.5 programs can seem somewhat primitive. There is heavy use of the prog feature, which is related partially to the programming style that was common at the time and partially to the lack of control structures in LISP 1.5. You could express iteration only by using recursion or by using prog+go; there wasn't a built-in looping facility. There is a library function called for that is something like the early form of Maclisp's do (the later form would be inherited in Common Lisp), but no surviving LISP 1.5 code uses it. [I'm thinking of making another post about converting programs using prog to the more structured forms that Common Lisp supports, if doing so would make the logic of the program clearer. Naturally there is a lot of literature on so called "goto elimination" and doing it automatically, so it would not present any new knowledge, but it would have lots of Lisp examples.]
LISP 1.5 did not have a let construct. You would use either a prog and setq or a lambda:
(let ((x y)) ...) 
is equivalent to
((lambda (x) ...) y) 
Something that stands out immediately when reading LISP 1.5 code is the heavy, heavy use of combinations of car and cdr. This might help (though car and cdr should be left alone when they are used with dotted pairs):
    (car x)   = (first x)
    (cdr x)   = (rest x)
    (caar x)  = (first (first x))
    (cadr x)  = (second x)
    (cdar x)  = (rest (first x))
    (cddr x)  = (rest (rest x))
    (caaar x) = (first (first (first x)))
    (caadr x) = (first (second x))
    (cadar x) = (second (first x))
    (caddr x) = (third x)
    (cdaar x) = (rest (first (first x)))
    (cdadr x) = (rest (second x))
    (cddar x) = (rest (rest (first x)))
    (cdddr x) = (rest (rest (rest x)))
Here are some higher compositions, even though LISP 1.5 doesn't have them.
    (caaaar x) = (first (first (first (first x))))
    (caaadr x) = (first (first (second x)))
    (caadar x) = (first (second (first x)))
    (caaddr x) = (first (third x))
    (cadaar x) = (second (first (first x)))
    (cadadr x) = (second (second x))
    (caddar x) = (third (first x))
    (cadddr x) = (fourth x)
    (cdaaar x) = (rest (first (first (first x))))
    (cdaadr x) = (rest (first (second x)))
    (cdadar x) = (rest (second (first x)))
    (cdaddr x) = (rest (third x))
    (cddaar x) = (rest (rest (first (first x))))
    (cddadr x) = (rest (rest (second x)))
    (cdddar x) = (rest (rest (rest (first x))))
    (cddddr x) = (rest (rest (rest (rest x))))
Things like defstruct and Flavors were many years away. For a long time, Lisp dialects had lists as the only kind of structured data, and programmers rarely defined functions with meaningful names to access components of data structures that are represented as lists. Part of understanding old Lisp code is figuring out how data structures are built up and what their components signify.
In LISP 1.5, it's fairly common to see nil used where today we'd use (). For example:
(LAMBDA NIL ...) 
instead of
(LAMBDA () ...) 
or
(PROG NIL ...)
instead of
(PROG () ...) 
Actually this practice was used in other Lisp dialects as well, although it isn't really seen in newer code.
Identifiers. If you examine the list of all the symbols described in the LISP 1.5 Programmer's Manual, you will notice that none of them differ only in the characters after the sixth character. In other words, it is as if symbol names have only six significant characters, so that abcdef1 and abcdef2 would be considered equal. But it doesn't seem like that was actually the case, since there is no mention of such a limitation in the manual. Another thing of note is that many symbols are six characters or fewer in length.
(A sequence of six characters is nice to store on the hardware on which LISP 1.5 was running. The processor used thirty-six-bit words, and characters were six-bit; therefore six characters fit in a single word. It is conceivable that it might be more efficient to search for names that take only a single word to store than for names that take more than one word to store, but I don't know enough about the computer or implementation of LISP 1.5 to know if that's true.)
Even though the limit on names was thirty characters (the longest symbol names in standard Common Lisp are update-instance-for-different-class and update-instance-for-redefined-class, both thirty-five characters in length), only a few of the LISP 1.5 names are not abbreviated. Things like terpri ("terminate print") and even car and cdr ("contents of address part of register" and "contents of decrement part of register"), which have stuck around until today, are pretty inscrutable if you don't know what they mean.
Thankfully the modern style is to limit abbreviations. Comparing the names that were introduced in Common Lisp versus those that have survived from LISP 1.5 (see the "Library" section below) shows a clear preference for good naming in Common Lisp, even at the risk of lengthy names. The multiple-value-bind operator could easily have been named mv-bind, but it wasn't.

Fundamental differences.

Truth values. Common Lisp has a single value considered to be false, which happens to be the same as the empty list. It can be represented either by the symbol nil or by (); either of these may be quoted with no difference in meaning. Anything else, when considered as a boolean, is true; however, there is a self-evaluating symbol, t, that traditionally is used as the truth value whenever there is no other more appropriate one to use.
In LISP 1.5, the situation was similar: Just like Common Lisp, nil or the empty list are false and everything else is true. But the symbol nil was used by programmers only as the empty list; another symbol, f, was used as the boolean false. It turns out that f is actually a constant whose value is nil. LISP 1.5 had a truth symbol t, like Common Lisp, but it wasn't self-evaluating. Instead, it was a constant whose permanent value was *t*, which was self-evaluating. The following code will set things up so that the LISP 1.5 constants work properly:
    (defconstant *t* t) ; (eq *t* t) is true
    (defconstant f nil)
Recall the practice in older Lisp code that was mentioned above of using nil in forms like (lambda nil ...) and (prog nil ...), where today we would probably use (). Perhaps this usage is related to the fact that nil represented an empty list more than it did a false value; or perhaps the fact that it seems so odd to us now is related to the fact that there is even less of a distinction between nil the empty list and nil the false value in Common Lisp (there is no separate f constant).
Function storage. In Common Lisp, when you define a function with defun, that definition gets stored somehow in the global environment. LISP 1.5 stores functions in a much simpler way: A function definition goes on the property list of the symbol naming it. The indicator under which the definition is stored is either expr or fexpr or subr or fsubr. The expr/fexpr indicators were used when the function was interpreted (written in Lisp); the subr/fsubr indicators were used when the function was compiled (or written in machine code). Functions can be referred to based on the property under which their definitions are stored; for example, if a function named f has a definition written in Lisp, we might say that "f is an expr."
When a function is interpreted, its lambda expression is what is stored. When a function is compiled or machine coded, a pointer to its address in memory is what is stored.
The choice between expr and fexpr and between subr and fsubr is based on evaluation. Functions that are exprs and subrs are evaluated normally; for example, an expr is effectively replaced by its lambda expression. But when an fexpr or an fsubr is to be processed, the arguments are not evaluated. Instead they are put in a list. The fexpr or fsubr definition is then passed that list and the current environment. The reason for the latter is so that the arguments can be selectively evaluated using eval (which took a second argument containing the environment in which evaluation is to occur). Here is an example of what the definition of an fexpr might look like, LISP 1.5 style. This function takes any number of arguments and prints them all, returning nil.
    (LAMBDA (A E)
      (PROG ()
       LOOP (PRINT (EVAL (CAR A) E))
            (COND ((NULL (CDR A)) (RETURN NIL)))
            (SETQ A (CDR A))
            (GO LOOP)))
The "f" in "fexpr" and "fsubr" seems to stand for "form", since fexpr and fsubr functions got passed a whole form.
The top level: evalquote. In Common Lisp, the interpreter is usually available interactively in the form of a "Read-Evaluate-Print-Loop", for which a common abbreviation is "REPL". Its structure is exactly as you would expect from that name: Repeatedly read a form, evaluate it (using eval), and print the results. Note that this model is the same as top level file processing, except that the results of only the last form are printed, when it's done.
In LISP 1.5, the top level is not eval, but evalquote. Here is how you could implement evalquote in Common Lisp:
    (defun evalquote (operator arguments)
      (eval (cons operator arguments)))
LISP 1.5 programs commonly look like this (define takes a list of function definitions):
    DEFINE ((
      (FUNCTION1 (LAMBDA () ...))
      (FUNCTION2 (LAMBDA () ...))
      ...
    ))
which evalquote would process as though it had been written
    (DEFINE (
      (FUNCTION1 (LAMBDA () ...))
      (FUNCTION2 (LAMBDA () ...))
      ...
    ))
Evaluation, scope, extent. Before further discussion, here is the evaluator for LISP 1.5 as presented in Appendix B, translated from m-expressions to approximate Common Lisp syntax. This code won't run as it is, but it should give you an idea of how the LISP 1.5 interpreter worked.
    (defun evalquote (function arguments)
      (if (atom function)
          (if (or (get function 'fexpr)
                  (get function 'fsubr))
              (eval (cons function arguments) nil))
          (apply function arguments nil)))

    (defun apply (function arguments environment)
      (cond ((null function) nil)
            ((atom function)
             (let ((expr (get function 'expr))
                   (subr (get function 'subr)))
               (cond (expr
                      (apply expr arguments environment))
                     (subr
                      ; see below
                      )
                     (t
                      (apply (cdr (sassoc function environment (lambda () (error "A2"))))
                             arguments
                             environment)))))
            ((eq (car function) 'label)
             (apply (caddr function)
                    arguments
                    (cons (cons (cadr function) (caddr function))
                          environment)))
            ((eq (car function) 'funarg)
             (apply (cadr function) arguments (caddr function)))
            ((eq (car function) 'lambda)
             (eval (caddr function)
                   (nconc (pair (cadr function) arguments)
                          environment)))
            (t
             (apply (eval function environment) arguments environment))))

    (defun eval (form environment)
      (cond ((null form) nil)
            ((numberp form) form)
            ((atom form)
             (let ((apval (get form 'apval)))
               (if apval
                   (car apval)
                   (cdr (sassoc form environment (lambda () (error "A8")))))))
            ((eq (car form) 'quote)
             (cadr form))
            ((eq (car form) 'function)
             (list 'funarg (cadr form) environment))
            ((eq (car form) 'cond)
             (evcon (cdr form) environment))
            ((atom (car form))
             (let ((expr (get (car form) 'expr))
                   (fexpr (get (car form) 'fexpr))
                   (subr (get (car form) 'subr))
                   (fsubr (get (car form) 'fsubr)))
               (cond (expr
                      (apply expr (evlis (cdr form) environment) environment))
                     (fexpr
                      (apply fexpr (list (cdr form) environment) environment))
                     (subr
                      ; see below
                      )
                     (fsubr
                      ; see below
                      )
                     (t
                      (eval (cons (cdr (sassoc (car form) environment (lambda () (error "A9"))))
                                  (cdr form))
                            environment)))))
            (t
             (apply (car form) (evlis (cdr form) environment) environment))))

    (defun evcon (cond environment)
      (cond ((null cond) (error "A3"))
            ((eval (caar cond) environment)
             (eval (cadar cond) environment))
            (t
             (evcon (cdr cond) environment))))

    (defun evlis (list environment)
      (maplist (lambda (j) (eval (car j) environment))
               list))
(The definition of evalquote earlier was a simplification to avoid the special case of special operators in it. LISP 1.5's apply can't handle special operators (which is also true of Common Lisp's apply). Hopefully the little white lie can be forgiven.)
There are several things to note about these definitions. First, it should be reiterated that they will not run in Common Lisp, for many reasons. Second, in evcon an error has been corrected; the original says in the consequent of the second branch (effectively)
(eval (cadar environment) environment) 
Now to address the "see below" comments. In the manual it describes the actions of the interpreter as calling a function called spread, which takes the arguments given in a Lisp function call and puts them into the machine registers expected with LISP 1.5's calling convention, and then executes an unconditional branch instruction after updating the value of a variable called $ALIST to the environment passed to eval or to apply. In the case of fsubr, instead of calling spread, since the function will always get two arguments, it places them directly in the registers.
You will note that apply is considered to be a part of the evaluator, while in Common Lisp apply and eval are quite different. Here it takes an environment as its final argument, just like eval. This fact highlights an incredibly important difference between LISP 1.5 and Common Lisp: When a function is executed in LISP 1.5, it is run in the environment of the function calling it. In contrast, Common Lisp creates a new lexical environment whenever a function is called. To exemplify the differences, the following code, if Common Lisp were evaluated like LISP 1.5, would be valid:
    (defun weird (a b)
      (other-weird 5))

    (defun other-weird (n)
      (+ a b n))
In Common Lisp, the function weird creates a lexical environment with two variables (the parameters a and b), which have lexical scope and indefinite extent. Since the body of other-weird is not lexically within the form that binds a and b, trying to make reference to those variables is incorrect. You can thwart Common Lisp's lexical scoping by declaring those variables to have indefinite scope:
    (defun weird (a b)
      (declare (special a b))
      (other-weird 5))

    (defun other-weird (n)
      (declare (special a b))
      (+ a b n))
The special declaration tells the implementation that the variables a and b are to have indefinite scope and dynamic extent.
Let's talk now about the funarg branch of apply. The function/funarg device was introduced some time in the sixties in an attempt to solve the scoping problem exemplified by the following problematic definition (using Common Lisp syntax):
    (defun testr (x p f u)
      (cond ((funcall p x) (funcall f x))
            ((atom x) (funcall u))
            (t (testr (cdr x)
                      p
                      f
                      (lambda () (testr (car x) p f u))))))
This function is taken from page 11 of John McCarthy's History of Lisp.
The only problematic part is the (car x) in the lambda in the final branch. The LISP 1.5 evaluator does little more than textual substitution when applying functions; therefore (car x) will refer to whatever x is currently bound whenever the function (lambda expression) is applied, not when it is written.
How do you fix this issue? The solution employed in LISP 1.5 was to capture the environment present when the function expression is written, using the function operator. When the evaluator encounters a form that looks like (function f), it converts it into (funarg f environment), where environment is the current environment during that call to eval. Then when apply gets a funarg form, it applies the function in the environment stored in the funarg form instead of the environment passed to apply.
Something interesting arises as a consequence of how the evaluator works. Common Lisp, as is well known, has two separate name spaces for functions and for variables. If a Common Lisp implementation encounters
(lambda (f x) (f x)) 
the result is not a function applying one of its arguments to its other argument, but rather a function applying a function named f to its second argument. You have to use an operator like funcall or apply to use the functional value of the f parameter. If there is no function named f, then you will get an error. In contrast, LISP 1.5 will eventually find the parameter f and apply its functional value, if there isn't a function named f—but it will check for a function definition first. If a Lisp dialect that has a single name space is called a "Lisp-1", and one that has two name spaces is called a "Lisp-2", then I guess you could call LISP 1.5 a "Lisp-1.5"!
How can we deal with indefinite scope when trying to get LISP 1.5 programs to run in Common Lisp? Well, with any luck it won't matter; ideally the program does not have any references to variables that would be out of scope in Common Lisp. However, if there are such references, there is a fairly simple fix: Add special declarations everywhere. For example, say that we have the following (contrived) program, in which define has been translated into defun forms to make it simpler to deal with:
(defun f (x)
  (prog (m)
    (setq m a)
    (setq a 7)
    (return (+ m b x))))

(defun g (l)
  (h (* b a)))

(defun h (i)
  (/ l (f (setq b (setq a i)))))

(defun p ()
  (prog (a b i)
    (setq a 4)
    (setq b 6)
    (setq i 3)
    (return (g (f 10)))))
The result of calling p should be 10/63. To make it work, add special declarations wherever necessary:
(defun f (x)
  (declare (special a b))
  (prog (m)
    (setq m a)
    (setq a 7)
    (return (+ m b x))))

(defun g (l)
  (declare (special a b l))
  (h (* b a)))

(defun h (i)
  (declare (special a b l i))
  (/ l (f (setq b (setq a i)))))

(defun p ()
  (prog (a b i)
    (declare (special a b i))
    (setq a 4)
    (setq b 6)
    (setq i 3)
    (return (g (f 10)))))
Be careful about the placement of the declarations. It is required that the one in p be inside the prog, since that is where the variables are bound; putting it at the beginning (i.e., before the prog) would do nothing because the prog would create new lexical bindings.
This method is not optimal, since it really doesn't help too much with understanding how the code works (although being able to see which variables are free and which are bound, by looking at the declarations, is very helpful). A better way would be to factor out the variables shared among several functions (as long as you are sure that they are used in only those functions) and put them in a let. Doing that is more difficult than using global variables, but it leads to code that is easier to reason about. Of course, if a variable is used in a large number of functions, it might well be a better choice to create a global variable with defvar or defparameter.
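Here is a made-up sketch of that let-factoring style (not a translation of the example above): two functions share one variable, which stays lexical and private to them because both definitions live inside the same let.

(let ((counter 0))
  (defun bump-counter () (incf counter))
  (defun read-counter () counter))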
Not all LISP 1.5 code is as bad as that example!
Join us next time as we look at the LISP 1.5 library. In the future, I think I'll make some posts talking about getting specific programs running. If you see any errors, please let me know.
submitted by kushcomabemybedtime to lisp [link] [comments]

38[T4R][US/Anywhere] MTF who wants to be your Romeo...

...minus the whole double suicide thing!
Hello! I currently present female and am a 38yo trans woman who recently came to terms with my identity and now have the courage to pursue it. When I transitioned, non-binary wasn't as well known, so I wasn't aware it was an option; I'm going to start re-transitioning soon. I'm currently very fem but am going for more of an ENBY appearance leaning much more towards masculine and the male aesthetic overall. He/him pronouns please. I know this is difficult and confusing given my physical presentation at the time but someone who can see me for me would be amazing!
I’m looking for someone I can get to know and build a long term relationship with. I want to fall in love, build a life together, and have our happily ever after.
Definitely open to distance and would relocate for love! Currently in the US, EST.
I tend to like people who are in good shape and take care of themselves. Romantics are a plus and a killer sense of humor is the icing on the cake. I’m pansexual so if you think we would click, message me or shoot me a chat!
I’ll save some of the details about myself for our future amazing conversations! I’m very real and serious, and you should be too. I have pics of myself to share, albeit very feminine at the moment, and a few different apps to chat on if this isn’t your preference.
Okay, stop reading now, and let me sweep you off your feet already!
submitted by Greenwith-ENBY to ForeverAloneDating [link] [comments]

How exactly does put and call work?

Hey guys,
So I did a bit of research on binary option (binary options?), and so far my understanding goes a little bit like this:
So, when you place a call, you say that you think the stock will rise in a certain period of time, and after it expires, you get the right to buy those stocks for a predefined amount of money. Then you can sell those stocks again, and if the price went up, you made some money. Puts work similarly, just that instead of the right to buy it's the right to sell.
Now I read somewhere (I think it was wikipedia) that binary options are also known as cash-or-nothing bets: either you make some predetermined amount of money, or you don't, and either way, you have to pay a fee. Now that doesn't fit my current understanding, so I'll give a short example:
Let's say I place a call on a stock, betting that after one day it'll be worth more than $100, which basically gives me the right to buy it in 24h for $100. Now my profit depends on how high that stock rises, like if after a day it trades for $110 I make $10 (minus fee), and if it trades for $200 I make $100 (minus fee). So definitely not "fixed return".
What did I get wrong? Thanks for your help :)
submitted by chrismit3s to stocks [link] [comments]

[Eustacchio Raulli] Simmons is the main reason Philly has a real shot this year

Very long thread from an NBA scout discussing Simmons' value and the way defense in general is played in high leverage games, worth the read imo.
Simmons is the X-factor that could put PHI over the top this year, and it has nothing to do with whether he starts taking 3s.
Let's start by discussing rim protection, which typically is the primary battleground between offense and defense.
Most of the time when discussing rim protection we talk about degree of impact at the rim (lower opp FG% in the paint) or degree of deterrence (lower frequency of FGA in the paint). Behemoths like Gobert and Embiid shine in these areas. 6 of the Top 7 in 3 yr RA-DeFG% are bigs.
From a macro perspective, these are the factors that matter most. Whatever plus-minus variant you prefer, or whichever angle you tend to watch film from, these are the players that will consistently make the highest impact defensive plays (outside the occasional pick-6).
There is a 3rd factor, however, that often goes overlooked. Moreover, it has much greater relative importance in the playoffs than the regular season -- under what circumstances can a defense maintain a measure of rim protection? This is at the core of why versatility matters.
Rudy Gobert makes the greatest degree of impact when protecting the rim of any NBA player. He also provides no rim protection when forced to defend 26 feet from the basket. The goal of the offense, then, is to create situations where he cannot protect the rim.
This isn't easy, and most offenses can't do so consistently within a 24 second shot clock. However, if you remove the subset of bad teams things change. Only the 8 best teams remain in R2 of the playoffs. Within this context sustainability of rim protection grows more important.
As far as I can tell, two key factors influence sustainability of rim protection for a defensive unit:
1) Point of attack defense
2) Rim protection redundancies
Let's discuss each in more detail.
1) Point of attack defense
Questions that are tested for each defensive unit in each matchup:
  • How frequently will the on-ball defender require help?
  • What degree of help is needed?
  • How predictable is the ensuing defensive rotation?
There are many layers to this subject.
First, how many worthwhile angles of attack does the offense have at their disposal? It's not always possible to match up the best POA defender with the ball-handler, so redundancies are needed in this area as well.
Moreover, much of the offensive strategy for each possession involves manipulating the point of attack. Pick-and-rolls, DHOs, etc are all methods of creating an advantage at the point of attack, with the value measured in X time spent to create advantage Y.
The reason teams spend so much time manipulating the point of attack is that most high value shot attempts stem from winning that battle and driving into the paint: driving layups, dump offs to bigs, kick outs to spot-up shooters.
A brief aside about how we think about what constitutes a good shot:
We need to think less about individual shots, and more about the network of shots produced by an action. A pull-up jumper may not be ideal, but if each one opens up two drives it's a good network of shots. So, the battle at the point of attack is very important to the eventual outcome of the possession. The difficulty is in determining how much value to ascribe to individual POA defenders in this regard.
One point that needs to be made:
Unless there is a significant talent gap, most POA defenders will 'lose' on most possessions. What we're really looking for is whether they lose slowly enough for the help rotation to arrive, or if they get burned and give up an easy shot.
Moreover, the results in any individual matchup will be... not quite binary, but certainly polarized. A player either holds up against his assignment, or he doesn't. I'm not certain of this, but my inclination is that it's less of a spectrum than many other facets of basketball.
A good example of this is the 2015 NBA Finals. Despite the injuries to 2/3 of the CLE Big Three, GSW had major problems at the POA early in the series. Barnes, Klay, Liv just weren't strong enough to check LBJ. Dray wasn't fast enough. CLE managed to go up 2-1. What changed?
In short, Andre Iguodala happened. He certainly didn't 'win' at the point of attack. But he did consistently lose more slowly than his teammates. This allowed Dray and Bogut to time their help defense more effectively. This illustrates why POA defense can vary greatly in value. The right defender for LBJ or KD is unlikely to be the right defender for Dame or Kyrie. This can dilute its value over the course of 82 games in +/- metrics. But in a playoff series, having the right guy matters a lot.
This is why versatility is a key characteristic for good POA defenders. Avery Bradley can defend the POA... if that POA is under 6-4, and not too strong (AKA not a playoff initiator). Teams relying on narrow players need guys that can match up with various sizes & speeds.
Bringing this back around to the original subject: Ben Simmons, the single most versatile defender in the league. He can guard almost anyone, which makes it very difficult (or simply sub-optimal) for the offense to shift the POA away from him.
This allows Philly to dictate the terms of the engagement far more than most defenses when they choose to do so. They have the personnel to pit their No. 1 POA defender against the opp No. 1 option, No. 2 vs No. 2, etc. That's rare, valuable, and could swing a postseason matchup.
The specific type(s) of POA defenders that carry the most value in a given year are dictated by the most dangerous offensive threats on contenders that season. In 2020, that's Giannis, LeBron, and Kawhi primarily, then to a lesser extent Luka, Harden, Kemba, Jimmy, Siakam, and... whomever Philly decides to run their offense through when the playoffs start.
The supporting casts matter here, too, of course. But in general how your defense matches up with MIL, LAL, and LAC is what matters most this year. Any other team will have to go through at least two of them to win. For 4 years, this was largely about Curry and LeBron. And for 4 years, there was never a defense that was equipped to handle both Curry and LeBron.
POA defense as a unit has significant value. Typically, that value is divided among many players due to varied angles of attack, skewing toward guard size players. Versatility can concentrate that value somewhat. In the playoffs, wing & forward POA defense matters most
Re: versatility, the key trait is strength for smaller players (e.g. Marcus Smart, Kyle Lowry), and lateral agility for larger players (e.g. Ben Simmons, Paul George)
Ultimately, what matters most in a team vs team matchup is how quickly the offense forces a help rotation; also, 'losing slowly' at the POA produces little value without good help defense around the POA defender. This makes it a secondary trait for good team defense, but one that has magnified importance when only good defenses are left
The 2nd key for sustainable rim protection is having rim protection redundancies as a team.
How much of a gap is there between the primary (5) and secondary (4) rim protector in a lineup? Is there any tertiary rim protection provided by 1-3?
The answers to these questions impact how appealing it is for an opposing offense to try and draw the 5 out to the perimeter to defend primary actions. How much value is there drawing Dwight Howard out to the 3 point line knowing that AD will still be lurking in help defense?
On one hand, if you can force the switch, a pull-up jumper vs a big does raise the baseline for a HC possession. But that's the catch-22, because pull-ups are typically a baseline rather than a desired endpoint. If that's the entirety of the plan, it rules out higher EV looks. While that higher baseline is nice, it's more valuable used as a tool to create higher EV looks. If the big is afraid of a pull-up, it will open driving lanes.
However, with redundant rim protection, the big can 'sit' on the jumper without worrying much about getting blown by. Think Kevin Love defending Steph Curry in the closing minutes of G7. He never holds up in that situation without knowing that Curry is chasing a 3PA.
That's an extreme example, but it illustrates how redundant rim protection can change the dynamic of that situation and alter the network of shots it can produce. In turn, that alters the amount of effort the offense will put into creating that situation in the first place.
So, then, what types of players create the most value in this regard? Players that provide a measure of rim protection while also being capable of holding up in perimeter defense. Draymond is the ultimate example, but also Giannis, AD, Siakam, Tucker, Isaac, Millsap, etc.
Notice a theme here? Pretty much every elite defense has one of these connecting pieces, a player that overlaps between rim protection and perimeter defense.
Moreover, these players are at the root of every successful form of small-ball. The key isn't going smaller just for the sake of more speed & skill. It's adding that w/o sacrificing rim protection. GSW was so successful because they had Dray, KD, Iggy, and Klay to defend the rim.
Bringing this back around to Ben Simmons & Philly, in addition to being the most versatile POA defender in the league he also provides a (small) measure of secondary rim protection when away from the POA. So do Horford, Tobi, and J-Rich. Also Matisse, if he gets any PS burn.
From a tactical perspective, what this means is that Philly has rim protection that is impactful, deterring, and sustainable. This will make them a tough out for any postseason opponent, regardless of their RS struggles. Joel Embiid will likely make the highest impact defensive plays for Philly in the postseason. Just realize that the multi-faceted skill set of Ben Simmons (and the rest of the supporting cast) plays a key role in keeping him in a position to make those plays.
Also, generally speaking, this is part of why I value versatile POA defenders like PG or Klay and connecting pieces like Siakam and Giannis more highly than +/- metrics. They help their defenses run at peak efficiency in varied circumstances.
I don't care how much you shut down bad teams in the RS. I care if you can hold up against good teams in the PS. For example, I thought Paul George deserved DPOY last year, with Giannis 2nd, and Gobert 3rd. Maybe this POV is too slanted toward versatility, but it is what it is.
Legend:
  • RS = Regular season
  • PS = Post season
  • POA = Point of Attack
  • EV = Expected Value
  • R2 = Round 2
  • DeFG% = Defensive FG%
  • DHO = Dribble Hand Off
Tweet thread
submitted by kobmug_v2 to nba [link] [comments]

23TiB On CephFS & Growing

Hardware
Previously I posted about the used 60-bay DAS units I recently acquired and racked. Since then I've figured out the basics of using them and have them up and working.
Ceph
The current setup has one unit with 60 2TB-3TB drives attached to a single R710. The other unit has 22 2TB, 3TB, and 8TB disks. It's attached to one active R710, although as the number of 8TB disks grows I'll be forced to attach the other two R710s to it. The reason for this is Ceph OSDs need 1GB of RAM per OSD plus 1GB of RAM per TB of disk. The R710s each have 192GB of RAM in them.
Each of the OSDs is set to use one of the hard drives from the 60-bay DAS units. They also share a 70GiB hardware RAID1 array of SSDs that's used for the Bluestore journal. This gives each of them only a few GiB of space on it. However, testing with and without this configured made a HUGE difference in performance - 15MB/s-30MB/s vs 50MB/s-300MB/s.
There is also a hardware RAID1 of SSDs that's 32GB in size. They are used for OSDs too. Those OSDs are set as SSD in CRUSH and the CephFS Metadata pool uses that space rather than HDD tagged space. This helps with metadata performance quite a bit. The only downside is that with only two nodes (for now) I can't set the replication factor above replicated,size=2,min_size=2. Once there is a third node it'll be replicated,size=3,min_size=2. This configuration will allow all but one host to fail without data loss on CephFS's metadata pool.
The current setup gives just over 200TiB of raw Ceph capacity. Since I'm using an erasure coded pool for the main data storage (Jerasure, k=12, m=3), only 12 of every 15 chunks are data, so usable space is roughly 80% of raw - call it around 150TiB once you subtract some for the CephFS metadata pool, overhead, and other such things.
Real World Performance
When loading data the performance is currently limited by the 8TB drives. Data is usually distributed between OSDs proportional to free capacity on the OSD. This results in the 8TB getting the majority of the data and thus getting hit the hardest. Once 8TB disks make up the majority of the capacity it will be less of an issue. Although it's possible to change the weight of the OSDs to direct more load to the smaller ones.
The long term plan is to move 8TB drives from the SAN and DAS off of the server that's making use of this capacity as they're freed up by moving data from them to CephFS.
When the pool has been idle for a bit the performance is usually 150MB/s to 300MB/s sustained writes using two rsync processes with large, multi-GB files. Once it's been saturated for a few hours the performance has dropped down to 50MB/s to 150MB/s.
The performance with mixed read/write, small files write, small file read, and pretty much anything but big writes hasn't been tested yet. Doing that now while it's just two nodes and 80 OSDs isn't the best time. Once all the storage is migrated over - around 250TiB additional disk - it should be reasonable to produce a good baseline performance test. I'll post about that then.
Stability
I've used Ceph in the past. It was a constant problem in terms of availability and crazy bad performance. Plus it was a major pain to set up. A LOT has changed in the few years since I last used it.
This cluster has had ZERO issues since it was set up with SELinux off and fs.aio-max-nr=1000000. Before fs.aio-max-nr was changed from its default of 500,000, there were major issues deploying all 60 OSDs. SELinux was also making things hard for the deploy process.
Deployment
Thanks to CephAdm and Docker it's super easy to get started. You basically just copy the static cephadm binary to the CentOS 7 system, have it add repos and install an RPM copy of cephadm, then set up the required docker containers to get things rolling.
Once that's done it's just a few ceph orch commands to get new hosts added and deploy OSDs.
Two critical gotchas I found were:
If you don't do the fs.aio-max-nr part and maybe the SELinux part then you may run into issues if you have a large number of OSDs per host system. Turns out that 60-disks per host system is a lot ;)
$ cat /etc/sysctl.d/99-osd.conf
# For OSDs
fs.aio-max-nr=1000000
I chose to do my setup from the CLI, although the web GUI works pretty well too. The file do.yaml below defines how to create OSDs on the server stor60. The tl;dr is that it uses all drives with a size of at least 1TB for bluestore-backed OSDs, and any drives between 50GB and 100GB for the bluestore journal (WAL) for those OSDs. The 'wal_slots' value caps the WAL device at 60 slots, so each OSD's WAL logical volume gets at most 1/60th of that device.
do.yaml
service_type: osd
service_id: osd_spec_default_stor60
placement:
  host_pattern: 'stor60'
data_devices:
  size: '1T:'
wal_devices:
  size: "50G:100G"
wal_slots: 60
Followed by
ceph orch apply osd -i do.yaml 
There are also other commands and ways to deploy OSDs.
You should totally deploy more than one mon and mds.
Web UI - Dashboard
An image of my current dashboard is here.
Edit: Formatting, links, spelling, and some new content
submitted by gpmidi to DataHoarder [link] [comments]

Differences between LISP 1.5 and Common Lisp, Part 2a

Here is the first part of the second part (I ran out of characters again...) of a series of posts documenting the many differences between LISP 1.5 and Common Lisp. The preceding post can be found here.
In this part we're going to look at LISP 1.5's library of functions.
Of the 146 symbols described in The LISP 1.5 Programmer's Manual, sixty-two have the same names as standard symbols in Common Lisp. These symbols are enumerated here.
The symbols t and nil have been discussed already. The remaining symbols are operators. We can divide them into groups based on how semantics (and syntax) differ between LISP 1.5 and Common Lisp:
  1. Operators that have the same name but have quite different meanings
  2. Operators that have been extended in Common Lisp (e.g. to accept a variable number of arguments), but that otherwise have similar enough meanings
  3. Operators that have remained effectively the same
The third group is the smallest. Some functions differ only in that they have a larger domain in Common Lisp than in LISP 1.5; for example, the length function works on sequences instead of lists only. Such functions are pointed out below. All the items in this list should, given the same input, behave identically in Common Lisp and LISP 1.5. They all also have the same arity.
These are somewhat exceptional items on this list. In LISP 1.5, car and cdr could be used on any object; for atoms, the result was undefined, but there was a result. In Common Lisp, applying car and cdr to anything that is not a cons is an error. Common Lisp does specify that taking the car or cdr of nil results in nil, which was not a feature of LISP 1.5 (it comes from Interlisp).
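For example, in Common Lisp:

(car nil)    ; => NIL (the Interlisp-derived behavior)
(cdr nil)    ; => NIL
(car 'foo)   ; an error in Common Lisp; in LISP 1.5 you got *some* result, just not a defined one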
Common Lisp's equal technically compares more things than the LISP 1.5 function, but of course Common Lisp has many more kinds of things to compare. For lists, symbols, and numbers, Common Lisp's equal is effectively the same as LISP 1.5's equal.
In Common Lisp, expt can return a complex number. LISP 1.5 does not support complex numbers (as a first class type).
As mentioned above, Common Lisp extends length to work on sequences. LISP 1.5's length works only on lists.
It's kind of a technicality that this one makes the list. In terms of functionality, you probably won't have to modify uses of return---in the situations in which it was used in LISP 1.5, it worked the same as it would in Common Lisp. But Common Lisp's definition of return is really hiding a huge difference between the two languages discussed under prog below.
As with length, this function operates on sequences and not only lists.
In Common Lisp, this function is deprecated.
LISP 1.5 defined setq in terms of set, whereas Common Lisp makes setq the primitive operator.
Of the remaining thirty-three, seven are operators that behave differently from the operators of the same name in Common Lisp:
  • apply, eval
The connection between apply and eval has been discussed already. Besides setq and prog or special or common, function parameters were the only way to bind variables in LISP 1.5 (the idea of a value cell was introduced by Maclisp); the manual describes apply as "The part of the interpreter that binds variables" (p. 17).
  • compile
In Common Lisp the compile function takes one or two arguments and returns three values. In LISP 1.5 compile takes only a single argument, a list of function names to compile, and returns that argument. The LISP 1.5 compiler would automatically print a listing of the generated assembly code, in the format understood by the Lisp Assembly Program or LAP. Another difference is that compile in LISP 1.5 would immediately install the compiled definitions in memory (and store a pointer to the routine under the subr or fsubr indicators of the compiled functions).
  • count, uncount
These have nothing to do with Common Lisp's count. Instead of counting the number of items in a collection satisfying a certain property, count is an interface to the "cons counter". Here's what the manual says about it (p. 34):
The cons counter is a useful device for breaking out of program loops. It automatically causes a trap when a certain number of conses have been performed.
The counter is turned on by executing count[n], where n is an integer. If n conses are performed before the counter is turned off, a trap will occur and an error diagnostic will be given. The counter is turned off by uncount[NIL]. The counter is turned on and reset each time count[n] is executed. The counter can be turned on so as to continue counting from the state it was in when last turned off by executing count[NIL].
This counting mechanism has no real counterpart in Common Lisp.
  • error
In Common Lisp, error is part of the condition system, and accepts a variable number of arguments. In LISP 1.5, it has a single, optional argument, and of course LISP 1.5 had no condition system. It had errorset, which we'll discuss later. In LISP 1.5, executing error would cause an error diagnostic and print its argument if given. While this is fairly similar to Common Lisp's error, I'm putting it in this section since the error handling capabilities of LISP 1.5 are very limited compared to those of Common Lisp (consider that this was one of the only ways to signal an error). Uses of error in LISP 1.5 won't necessarily run in Common Lisp, since LISP 1.5's error accepted any object as an argument, while Common Lisp's error needs designators for a simple-error condition. An easy conversion is to change (error x) into (error "~A" x).
  • map
This function is quite different from Common Lisp's map. The incompatibility is mentioned in Common Lisp: The Language:
In MacLisp, Lisp Machine Lisp, Interlisp, and indeed even Lisp 1.5, the function map has always meant a non-value-returning version. However, standard computer science literature, including in particular the recent wave of papers on "functional programming," have come to use map to mean what in the past Lisp implementations have called mapcar. To simplify things henceforth, Common Lisp follows current usage, and what was formerly called map is named mapl in Common Lisp.
But even mapl isn't the same as map in LISP 1.5, since mapl returns the list it was given and LISP 1.5's map returns nil. Actually there is another, even larger incompatibility that isn't mentioned: The order of the arguments is different. The first argument of LISP 1.5's map was the list to be mapped and the second argument was the function to map over it. (The order was changed in Maclisp, likely because of the extension of the mapping functions to multiple lists.) You can't just change all uses of map to mapl because of this difference. You could define a function like map-1.5, such as
(defun map-1.5 (list function)
  (mapl function list)
  nil)
and replace map with map-1.5 (or just shadow the name map).
  • function
This operator has been discussed earlier in this post.
Common Lisp doesn't need anything like LISP 1.5's function. However, mostly by coincidence, it will tolerate it in many cases; in particular, it works with lambda expressions and with references to global function definitions.
  • search
This function isn't really anything like Common Lisp's search. Here is how it is defined in the manual (p. 63, converted from m-expressions into Common Lisp syntax):
(defun search (x p f u)
  (cond ((null x) (funcall u x))
        ((funcall p x) (funcall f x))
        (t (search (cdr x) p f u))))
Somewhat confusingly, the manual says that it searches "for an element that has the property p"; one might expect the second branch to test (get x p).
The function is kind of reminiscent of the testr function, used to exemplify LISP 1.5's indefinite scoping in the previous part.
  • special, unspecial
LISP 1.5's special variables are pretty similar to Common Lisp's special variables—but only because all of LISP 1.5's variables are pretty similar to Common Lisp's special variables. The difference between regular LISP 1.5 variables and special variables is that symbols declared special (using this special special special operator) have a value on their property list under the indicator special, which is used by the compiler when no binding exists in the current environment. The interpreter knew nothing of special variables; thus they could be used only in compiled functions. Well, they could be used in any function, but the interpreter wouldn't find the special value. (It appears that this is where the tradition of Lisp dialects having different semantics when compiled versus when interpreted began; eventually Common Lisp would put an end to the confusion.)
You can generally change special into defvar and get away fine. However there isn't a counterpart to unspecial. See also common.
Now come the operators that are essentially the same in LISP 1.5 and in Common Lisp, but have some minor differences.
  • append
The LISP 1.5 function takes only two arguments, while Common Lisp allows any number.
  • cond
In Common Lisp, when no test in a cond form is true, the result of the whole form is nil. In LISP 1.5, an error was signaled, unless the cond was contained within a prog, in which case it would quietly do nothing. Note that the cond must be at the "top level" inside the prog; cond forms at any deeper level will error if no condition holds.
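A small illustration:

(cond ((= 1 2) 'nope))   ; => NIL in Common Lisp; in LISP 1.5 this would be an error outside of a prog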
  • gensym
The LISP 1.5 gensym function takes no arguments, while the Common Lisp function does.
  • get
Common Lisp's get takes three arguments, the last of which is a value to return if the symbol does not have the indicator on its property list; in LISP 1.5 get has no such third argument.
  • go
In LISP 1.5 go was allowed in only two contexts: (1) at the top level of a prog; (2) within a cond form at the top level of a prog. Later dialects would loosen this restriction, leading to much more complicated control structures. While progs in LISP 1.5 were somewhat limited, it is at least fairly easy to tell what's going on (e.g. loop conditions). Note that return does not appear to be limited in this way.
  • intern
In Common Lisp, intern can take a second argument specifying in what package the symbol is to be interned, but LISP 1.5 does not have packages. Additionally, the required argument to intern is a string in Common Lisp; LISP 1.5 doesn't really have strings, and so intern instead wants a pointer to a list of full words (of packed BCD characters; the print names of symbols were stored in this way).
  • list
In Common Lisp, list can take any number of arguments, including zero, but in LISP 1.5 it seems that it must be given at least one argument.
  • load
In LISP 1.5, load can't be given a filespec as an argument, for many reasons. Actually, it can't be given anything as an argument; its purpose is simply to hand control over to the loader. The loader "expects octal correction cards, 704 row binary cards, and a transfer card." If you have the source code that would be compiled into the material to be loaded, then you can just put it in another file and use Common Lisp's load to load it in. But if you don't have the source code, then you're out of luck.
  • mapcon, maplist
The differences between Common Lisp and LISP 1.5 regarding these functions are similar to those for map given above. Both of these functions returned nil in LISP 1.5, and they took the list to be mapped as their first argument and the function to map as their second argument. A major incompatibility to note is that maplist in LISP 1.5 did what mapcar in Common Lisp does; Common Lisp's maplist is different.
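A quick Common Lisp illustration of that last point (my own example):

(mapcar  #'identity '(1 2 3))   ; => (1 2 3)              -- applied to successive elements
(maplist #'identity '(1 2 3))   ; => ((1 2 3) (2 3) (3))  -- applied to successive tails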
  • member
In LISP 1.5, member takes none of the fancy keyword arguments that Common Lisp's member does, and returns only a truth value, not the tail of the list.
  • nconc
In LISP 1.5, this function took only two arguments; in Common Lisp, it takes any number.
  • prin1, print, terpri
In Common Lisp, these functions take an optional argument specifying an output stream to which they will send their output, but in LISP 1.5 prin1 and print take just one argument, and terpri takes no arguments.
  • prog
In LISP 1.5, the list of program variables was just that: a list of variables. No initial values could be provided as they can in Common Lisp; all the program variables started out bound to nil. Note that the program variables are just like any other variables in LISP 1.5 and have indefinite scope.
In the late '70s and early '80s, the maintainers of Maclisp and Lisp Machine Lisp wanted to add "naming" abilities to prog. You could say something like
(prog outer ()
  ...
  (prog ()
    (return ... outer)))
and the return would jump not just out of the inner prog, but also out of the outer one. However, they ran into a problem with integrating a named prog with parts of the language that were based on prog. For example, they could add a special case to dotimes to handle an atomic first argument, since regular dotimes forms had a list as their first argument. But Maclisp's do had two forms: the older (introduced in 1969) form
(do atom initial step-form end-test body...) 
and the newer form, which was identical to Common Lisp's do. The older form was equivalent to
(do ((atom initial step-form)) (end-test) body...) 
Since the older form was still supported, they couldn't add a special case for an atomic first argument because that was the normal case of the older kind of do. They ended up not adding named prog, owing to these kinds of difficulties.
However, during the discussion of how to make named prog work, Kent Pitman sent a message that contained the following text:
I now present my feelings on this issue of how DO/PROG could be done in order this haggling, part of which I think comes out of the fact that these return tags are tied up in PROG-ness and so on ... Suppose you had the following primitives in Lisp:

(PROG-BODY ...) which evaluated all non-atomic stuff. Atoms were GO-tags. Returns () if you fall off the end. RETURN does not work from this form.

(PROG-RETURN-POINT form name) name is not evaluated. Form is evaluated and if a RETURN-FROM specifying name (or just a RETURN) were executed, control would pass to here. Returns the value of form if form returns normally or the value returned from it if a RETURN or RETURN-FROM is executed. [Note: this is not a [*]CATCH because it is lexical in nature and optimized out by the compiler. Also, a distinction between NAMED-PROG-RETURN-POINT and UNNAMED-PROG-RETURN-POINT might be desirable – extrapolate for yourself how this would change things – I'll just present the basic idea here.]

(ITERATE bindings test form1 form2 ...) like DO is now but doesn't allow return or goto. All forms are evaluated. GO does not work to get to any form in the iteration body.

So then we could just say that the definitions for PROG and DO might be (ignore for now old-DO's – they could, of course, be worked in if people really wanted them but they have nothing to do with this argument) ...

(PROG [  ]  . ) => (PROG-RETURN-POINT (LET  (PROG-BODY . )) [  ])
(DO [  ]   . ) => (PROG-RETURN-POINT (ITERATE   (PROG-BODY . )) [  ])

Other interesting combinations could be formed by those interested in them. If these lower-level primitives were made available to the user, he needn't feel tied to one of PROG/DO – he can assemble an operator with the functionality he really wants.
Two years later, Pitman would join the team developing the Common Lisp language. For a little while, incorporating named prog was discussed, which eventually led to the splitting of prog in quite a similar way to Pitman's proposal. Now prog is a macro, simply combining the three primitive operators let, block, and tagbody. The concept of the tagbody primitive in its current form appears to have been introduced in this message, which is a writeup by David Moon of an idea due to Alan Bawden. In the message he says
The name could be GO-BODY, meaning a body with GOs and tags in it, or PROG-BODY, meaning just the inside part of a PROG, or WITH-GO, meaning something inside of which GO may be used. I don't care; suggestions anyone?
Guy Steele, in his proposed evaluator for Common Lisp, called the primitive tagbody, which stuck. It is a little bit more logical than go-body, since go is just an operator and allowed anywhere in Common Lisp; the only special thing about tagbody is that atoms in its body are treated as tags.
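As a rough sketch (mine, not Steele's evaluator), here is a simple prog and an approximate decomposition into the three primitives; both forms evaluate to 3:

(prog (i)
   (setq i 0)
 again
   (when (< i 3) (setq i (+ i 1)) (go again))
   (return i))

;; behaves like

(block nil
  (let (i)
    (tagbody
       (setq i 0)
     again
       (when (< i 3) (setq i (+ i 1)) (go again))
       (return-from nil i))))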
  • prog2
In LISP 1.5, prog2 was really just a function that took two arguments and returned the result of the evaluation of the second one. The purpose of it was to avoid having to write (prog () ...) everywhere when all you want to do is call two functions. In later dialects, progn would be introduced and the "implicit progn" feature would remove the need for prog2 used in this way. But prog2 stuck around and was generalized to a special operator that evaluated any number of forms, while holding on to the result of the second one. Programmers developed the (prog2 nil ...) idiom to save the result of the first of several forms; later prog1 was introduced, making the idiom obsolete. Nowadays, prog1 and prog2 are used typically for rather special purposes.
Regardless, in LISP 1.5 prog2 was a machine-coded subroutine that was equivalent to the following function definition in Common Lisp:
(defun prog2 (one two) two) 
  • read
The read function in LISP 1.5 did not take any arguments; Common Lisp's read takes four. In LISP 1.5, read read either from "SYSPIT" or from the punched card reader. It seems that SYSPIT stood for "SYStem Paper (maybe Punched) Input Tape", and that it designated a punched tape reader; alternatively, it might designate a magnetic tape reader, but the manual makes reference to punched cards. But more on input and output later.
  • remprop
The only difference between LISP 1.5's remprop and Common Lisp's remprop is that the value of LISP 1.5's remprop is always nil.
  • setq
In Common Lisp, setq takes an arbitrary even number of arguments, representing pairs of symbols and values to assign to the variables named by the symbols. In LISP 1.5, setq takes only two arguments.
  • sublis
LISP 1.5's sublis and subst do not take the keyword arguments that Common Lisp's sublis and subst take.
  • trace, untrace
In Common Lisp, trace and untrace are operators that take any number of arguments and trace the functions named by them. In LISP 1.5, both trace and untrace take a single argument, which is a list of the functions to trace.

Functions not in Common Lisp

We turn now to the symbols described in the LISP 1.5 Programmer's Manual that don't appear in Common Lisp. Let's get the easiest case out of the way first: Here are all the operators in LISP 1.5 that have a corresponding operator in Common Lisp, with notes about differences in functionality where appropriate.
  • add1, sub1
These functions are the same as Common Lisp's 1+ and 1- in every way, down to the type genericism.
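For example:

(1+ 41)    ; => 42
(1- 42)    ; => 41
(1+ 2.5)   ; => 3.5, type-generic just like add1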
  • conc
This is just Common Lisp's append, or LISP 1.5's append extended to more than two arguments.
  • copy
Common Lisp's copy-list function does the same thing.
  • difference
This corresponds to -, although difference takes only two arguments.
  • divide
This function takes two arguments and is basically a consing version of Common Lisp's floor:
(divide x y) = (multiple-value-list (floor x y)) 
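So, assuming that equivalence, the Common Lisp side looks like:

(multiple-value-list (floor 7 2))   ; => (3 1), the quotient and the remainder as a list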
  • digit
This function takes a single argument, and is like Common Lisp's digit-char-p except that the radix isn't variable, and it returns a true or false value only (and not the weight of the digit).
  • efface
This function deletes the first appearance of an item from a list. A call like (efface item list) is equivalent to the Common Lisp code (delete item list :count 1).
  • greaterp, lessp
These correspond to Common Lisp's > and <, although greaterp and lessp take only two arguments.
As a historical note, the names greaterp and lessp survived in Maclisp and Lisp Machine Lisp. Both of those languages also had > and <, which were used for the two-argument case; Common Lisp favored genericism and went with > and < only. However, a vestige of the old predicates still remains, in the lexicographic ordering functions: char-lessp, char-greaterp, string-lessp, string-greaterp.
  • minus
This function takes a single argument and returns its negation; it is equivalent to the one-argument case of Common Lisp's -.
  • leftshift
This function is the same as ash in Common Lisp; it takes two arguments, m and n, and returns m × 2^n. Thus if the second argument is negative, the shift is to the right instead of to the left.
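For example, using Common Lisp's ash:

(ash 5 2)    ; => 20, i.e. 5 × 2^2
(ash 20 -2)  ; => 5, a negative second argument shifts right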
  • liter
This function is identical in essence to Common Lisp's alpha-char-p, though more precisely it's closer to upper-case-p; LISP 1.5 was used on computers that made no provision for lowercase characters.
  • pair
This is equivalent to the normal, two-argument case of Common Lisp's pairlis.
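For example:

(pairlis '(a b c) '(1 2 3))   ; => ((A . 1) (B . 2) (C . 3)), though the standard does not fix the order of the pairs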
  • plus
This function takes any number of arguments and returns their sum; its Common Lisp counterpart is +.
  • quotient
This function is equivalent to Common Lisp's /, except that quotient takes only two arguments.
  • recip
This function is equivalent to the one-argument case of Common Lisp's /.
  • remainder
This function is equivalent to Common Lisp's rem.
  • times
This function takes any number of arguments and returns their product; its Common Lisp counterpart is *.
Part 2b will be posted in a few hours probably.
submitted by kushcomabemybedtime to lisp [link] [comments]

Survey results and mod application form

Hi y’all,
The survey has been up for a little while, and I've gotten a lot of answers from you, most of them very helpful.
So, I wanted to go through the results for those of you who are interested. I'll be specific with numbers where it's interesting, but I will mainly be discussing what I found interesting or informative while reading through the responses. You can find the mod application form at the bottom.
There are also some general notes at the bottom.
Remember that we have a discord. You can join here: https://discord.gg/U4V4JQH
Q1: Which country or region are you from? Be as vague or specific as you are comfortable with.
The vast, vast majority of you are from the US. To a surprising degree. Among US-citizens New York and California had the most representatives, confirming all of my prejudices about USA, that it’s really nothing but Manhattan and LA. No, I’m kidding.

Q2: How old are you?
44.8% are 20-25 (like me!), and 48.3% are 14-19 years old. Which is what to be expected. The oldest was a single person in the age bracket of 31-35.

Q3: What is your current occupation?
We have 4 PhD, and 4 post-grad students, 2 people in the workforce and the rest is split down the middle between high-school and undergrad-students.
I found it funny how many had given their own answers specifying that they are working part time while studying. In my home country, when you are asked your occupation on a survey there is only either student, unemployed, retired or employed essentially, because most people work while studying. Anyway, I hadn’t considered some of you wanted to specify, but I wanted to make clear, that I am fully aware that many, many students also work (including myself)

Q4: If you want to, please tell me about your major: Which are you considering/did you choose and why? If you are out of school or working part-time within your desired field, feel free to tell me what you are working with as well.
So, these are obviously individual answers, but I can say many, many studied or wanted to study the fine arts and Classics (obviously), as well as science (especially chemistry! Are y’all mad scientists?) and psychology. Not a lot of representation from the social sciences and STEM minus S.
My favourite answer to this question ended with the following: “I live for bringing beautiful things into this ugly world and being dramatic”. If that ain’t this community, then I don’t know what is.

Q5: What’s your gender?
3/4ths identify as female, 10% as male and the rest of you a mix of non-binary (which I forgot as an option, sorry), genderfluid, gender neutral, and prefer not to disclose.

Q6: Where did you first learn of Dark Academia?
Most people found DA on Instagram (36.2%). That really surprised me, since Instagram is one of the only possible social media platforms where I haven’t seen DA.
After Instagram, in the order of frequency, we have: Tumblr, Strange Æons’ video, people IRL, Pinterest (I’m so happy I’m not the only one who still uses Pinterest), Donna Tartt’s novels, Dead Poets society, others

Q7: Explain briefly and with your own words; what is Dark Academia?
Again, individual answers. But honestly, I’m going to be borrowing from a lot of your answers when writing “about”-sections in the future, because most of you were so, so eloquent in your explanations, damn. But since I loved so many of the answers, I decided to turn some of them into user flairs, so go nuts in those!

Q8: Which parts of Dark Academia appeals to you personally?
The most popular answer here, by far, was: The general values of thirst for knowledge, etc, (93.7%) which I think is very neat. Other than that you are generally primarily into the aesthetics, fashion, art, and literature.

Q9: Where did you first learn of DarkAcademia?
The vast majority (79%) of you actively searched for a DA-subreddit and found us. How nice.

Q10: Have you participated in any community activities such as introductions, book club, pen pals or the likes? If so, do you remember which?
Most of the answers to this questions were along the lines of “Not very much, but I’d like to”, which is great. We’re a new community, and according to most literature on the topic, the vast majority of Social media users are spectators, rather than commentator or participators, so that’s to be expected.
But my favourite answer for this question was:
“yES!”

Q11: What do you most come to the subreddit to find?
Again, primarily discussions on literature, art, fashion, and aesthetics as well as community.

Q12: What would you like more of?
Generally, it seems like you want more aesthetic and outfit-based posts, which I definitely get, I’ve been missing that too. Maybe we should do a weekly thread for outfits or something like that? What do you think?

Q13: What would you like less of?
A large minority (23%) wants less of a discussion of contemporary literature such as The Secret History.
Other than that most of you didn’t have anything you would like less of.

Q14: Do you have any suggestions for things you would love on the subreddit, or things that you don't like that you would like to see changed? Or generally, if you have anything you'd like to be sure you said, now's the time!
Individual answers, here. Many reiterated that you want more fashion and aesthetics -- especially personal examples from users and not picture-perfect outfits/rooms/etc from Tumblr or the likes. I think that is such a good point.
Another person suggested having older members of the community teach younger members in their fields of study. Is that something you would be into? I would love to facilitate a weekly or monthly lecture from someone in the community.

Q15: Do you believe in aliens? Why/why not?
The vast majority believes in aliens, and a big minority said they don’t believe in aliens as a concept, but do believe that there is other life in the universe – I love a pedant <3

Q16: Do you believe in ghosts? Why/why not?
A pretty big amount of you believe in ghosts.
This was my favourite response: “As a pan-culture cultural folklore phenomenon: yes. As is "I saw a ghost and it stole my cornchips": no.”

Q17: Do you believe in astrology? Do you know your sign(s)? If so, what are they?
Even though astrology seems to be one of the biggest supernatural trends in recent years, y’all haven’t fallen for that, and most of you don’t believe in it. Some think it sort of fun or interesting, and some used it as prompts for introspection. None believed 100%. Most of you were Pisces. Like, a third of those who shared their sign were Pisces. Pisces are supposedly creative and emotional, very much the artist sign. So that’s fun.

Q18: Do you believe in Myers-Briggs' 16 personalities? Do you know yours? If so, what is it?
Again a lot didn’t believe, but most knew their personality type. To literally NO ONE’S surprise, the VAST majority are either INFJ or INTJ, which are both introverted people who like to engage with abstract thoughts and organizing said thoughts into plans.
Also, fun fact only a single person had the S (instead of N), making them sensing, meaning more practical and down to earth. I thought that was interesting as well.

Q19: Do you know which Hogwarts house you identify with?
50.7% Ravenclaw (to no one’s surprise), 25.4% Slytherin, 6% Hufflepuff, 6% don’t identify with a specific house, and the rest are either weird combos, Gryffindors or long explanations that I didn’t read.
Q20: What's your favourite book?
A lot of classics like Dracula, Anna Karenina, Kafka, a lot of DA-novels such as If We Were Villains and The Secret History, whoever said “The Perks of Being a Wallflower or The Great Gatsby” – I would have wanted to be your best friend, when I was a freshman, also whoever said Momo, that’s an amazing book and no one knows it, but I’m so glad you do, and honestly it’s such a DA book.
This response honestly moved me: “I could as soon pick a favourite star from the heavens”.
All in all, I just thought you all in general picked really awesome books and I’m surprised at how many of my personal favourites I saw among your answers, so that really excites me!

Q21: What's your favourite film?
A lot of really different films were answered. Lots that I have never heard of, although Pirates of the Caribbean, Spiderman: Into the Spider-verse were also represented.
However, this was the best answer: “(and shrek 2 but tell nobody)” – sorry, I just told EVERYBODY.
Also: “A mix between Some Like It Hot and Dorian Gray.” That’s how people generally describe me 😉

Q22: What's your favourite song/album?
Until this survey, I thought Hozier was over and long forgotten. But DAMN you guys sure love Hozier.
“The entire discography of The Smiths” – This was the answer I was expecting.

Q23: Last but not least: Share a short poem or quote, that you love, with me!
Honestly, this question were just for my own sake.
But the American teenager who SHOCKED me by knowing Tove Ditlevsen gets to finish this walk-through off with the poem they shared: "In childhood's long night, both dim and dark/ there are small twinkling lights that burn bright / like traces memory's left there as sparks / while the heart freezes so and takes flight/... Your faith you took with you to great extremes / the first and the last to your cost / in the dark now somewhere it surely gleams/ and there is no more to be lost/ and someone or other draws near to you but/ will never quite manage to know you/ for beneath those small lights your life has been put / since when everyone must forego you"

General notes
Then there were a couple of requests for more STEM and male content, so if anyone was afraid or wary of sharing their STEM or male content, now is your time to shine, the interest is definitely there!
Another note, I’d like to add on: Dark Academia can be a lot of things and I can’t define it for everyone, nor will I try to. However, Dark Academia is not only about being studious or type A. And this subreddit, as long as I am a mod, will definitely not be “conservative” (I don’t mean politically, but rather philosophically). This is an aesthetic inspired by Ancient Greece, Oscar Wilde, Donna Tartt, it is never going to stop being hedonistic and indulgent to a certain degree, and that’s part of the appeal, I think. And also, of course, we’re going to keep being tolerant of everyone no matter race, ethnicity, gender, sexuality.
I also want to be putting a stop to the very repetitive “what should I wear”/”Where should I buy DA-clothes?”, but I don’t quite know the solution yet.

MOD APPLICATION: https://forms.gle/ZUBhxox5Pt9PzXSg6
submitted by VanGoghNotVanGo to DarkAcademia [link] [comments]

.DMG, .PKG and Zip - Differences

Hi everyone,
I'm new to Mac (moved from Linux). For my day to day needs, I use 'Brew' to install what I need. However, sometimes a package does not exist in Brew, and I have the option to select either a DMG file, PKG or ZIP.
Zip means I unpack it myself in "/opt". And if I ever want to remove it, I can just remove the /opt folder and undo anything I might have done manually (like linking a file to /usr/local/bin). Updating is of course manual.
But I'm not sure how DMG and .PKG work. Sometimes DMG means just dragging the file into the Applications folder (which I assume is just binaries getting copied). I couldn't find a way to "uninstall" an installed DMG file. So that's a minus. Do DMG installations at least get auto-updated?
What about PKG? Does it have an uninstaller that removes everything it extracts? Does it allow auto-updates?
Thanks!
submitted by Tall-Guy to MacOS [link] [comments]

Looking for practice? Want to expand your AHK knowledge? I got you covered.

I made a reply a while ago to Swaggurttt (could you give us an update of how things have been going?)
He wanted to learn more about AHK. So I provided him a list of new things to learn past just "press button > send keys".
Hopefully, some people reading will take this opportunity to branch out and learn some new things that AHK is capable of. From stepping into the aesthetically pleasing world of GUIs to using RegEx to become a string manipulating master. From braving the cryptic DllCall() command that lets you call code from other libraries (DLLs), thus making your scripts much more robust and useful, to having a whole slew of problems and puzzles that will test your ability to utilize AHK's capabilities.

Practice, Problems, and Challenges - It's like fun homework

Let's start with 4 websites that will give you tons of practice. From easy to insanely difficult. Between these 4 sites, you should have more things to do than you could ever finish.
Code Abbey
The site I've spent the most time on. From the easiest "add two variables" all the way to "write an AI". It's a good place to learn core programming skills and develop logic. Parsing through data, calculating variables, using arrays, etc...
Funny thing is this website is the reason I'm making this post. It has been a while since I used this site and I couldn't remember the address. So I looked up this post and...here we are!
Rosetta Code
Another good site, though I like Code Abbey's layout, sorting, and input/output method more. Rosetta has its own pros, like showing you solutions in TONS of different languages. Very helpful if you're familiar with other programming languages.
Code Chef
This was suggested to me a while ago and I've only done a couple of problems. Not because it's a bad site, but because I just haven't had the time to try and nuke the list. I figured it was worth including.
Those 3 should keep you busy for quite some time. Plus...
The AHK Subreddit
This subreddit is a treasure trove of problems! I used to spend a ton of time just trying other people's problems, coming up with my own solutions, and comparing what I come up with to others. You can learn a TON doing it this way. And comparing answers afterward only teaches you newer and better ways of doing things. Consider the unbelievable amount of backlogged posts you can go through. Years and years worth.

How about some suggestions for parts of AHK to learn?

RegEx (Regular Expressions) - Master of Strings

This is a mini-language for manipulating strings. Learn it! If there's a discernible pattern to what you're looking for, you definitely can write a RegEx to find it. See: RegExMatch and RegExReplace. Bonus: It should be noted that RegEx is its own little mini-language with its own rules and syntax. BUT, once you learn it, you now know it for almost every other programming language (minus some discrepancies between flavors).
RegEx Resources:

COMs - Letting you interface with other shit one command at a time!

COMs are pretty amazing. They let you interact with lots of different things on Windows. Microsoft lets you access things like Internet Explorer, Excel, Word, Access (literally the entire Office suite), the shell, WinHTTP, VBScript, etc. It lets you drive those programs directly from AHK, which makes your scripts immeasurably more reliable than blind clicking and typing. You can web scrape like a boss using the IE COM. You can manipulate Excel spreadsheets, get data from them, update them, and whatever else you want. COMs are handy.
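As a quick taste, here's a hedged sketch that drives Excel through its COM interface (AHK v1; assumes Excel is installed, and the cell values are just placeholders).
Example:
; Create an Excel instance, write two cells, read one back, then close Excel.
xl := ComObjCreate("Excel.Application")
xl.Visible := true
wb := xl.Workbooks.Add()
xl.Range("A1").Value := "Hello from AHK"
xl.Range("A2").Value := 42
MsgBox % "A2 holds: " xl.Range("A2").Value
wb.Close(false)   ; false = don't save this throwaway workbook
xl.Quit()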
Resources:
There are also quite a few videos on YouTube you can check out.

GUIs - I feel pretty. Oh so pretty....

Learn to make and manipulate Graphical User Interfaces, or GUIs. When you want user-friendly interaction with the users of your code, GUIs can be the perfect answer. A non-programmer isn't going to want to run scripts with switches, open up .ahk files, or edit code to change settings. Enter the GUI!
The "Read This Before Posting!" stickied tutorial post has some good WYSIWYG suggestions. And it stands for What You See Is What You Get...that's really what they're called. Personally, I'm a fan of GUI Creator by Maestrith.
You'll spend a LOT of time trying to learn all the different things GUIs can do.
The AHK docs are the go-to for this stuff. Here are the pages you'll be visiting quite often:
I'd like to add a neat method I've started using for tracking GUI elements, because it used to be a struggle for me. When I create a GUI, I like to keep everything inside of functions, and I really don't want to create global variables for everything. Instead, I make a single global object in the auto-execute section. Then, whenever I create a GUI element, I always add the hwnd option to it and immediately save that HWND into the object. Plus, you can name the keys logically so they're much easier to recall.
Example:
Global guiHwnd := {}
NewGUI()
MsgBox, % "Cancel Btn: " guiHwnd.CancelBtn "`nOK Btn: " guiHwnd.OKBtn
ExitApp

NewGUI(){
    Gui, New
    Gui, Add, Button, hwndBtn gOKBtn, OK
    guiHwnd.okBtn := Btn            ; save this control's HWND under a logical name
    Gui, Add, Button, hwndBtn gCancelBtn, Cancel
    guiHwnd.cancelBtn := Btn
    Gui, Show
    Return
}

; Minimal targets for the g-labels above; the original snippet left these out,
; which makes AHK complain that the target label does not exist.
OKBtn(){
}
CancelBtn(){
}
GUIs are an excellent segue into DllCalls. Why? Because DllCall can let you fine-tune a GUI.

DllCall - Rule #1 of coding: Don't reinvent the wheel!

One thing we learn real quick in programming is that you don't rewrite code that's already been created and thoroughly tested. It's a waste of time! That's why people bundle their code into neat packages called DLLs (Dynamic Link Libraries) and publish them for others to use. DLLs let you call commands and functions outside of the native AHK language, meaning you can interact with Windows' internal functions directly from your script! This opens up a TON of possibilities for any script and unlocks some of the restrictions that come with AHK, like changing things about GUIs that AHK doesn't have an option for, or getting info directly from the operating system when there's no AHK command or function to do so.
It's not limited to Windows functions, either. DllCall can access any DLL, as long as you know how to interface with it.
This MSDN link has a top-level index of the core things you need to learn about DllCalling into Windows.
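Here's a small sketch of what DllCall looks like in practice (AHK v1; GetTickCount and SetCursorPos are standard Win32 functions, so no extra DLL needs to be loaded).
Example:
; Read the system uptime from kernel32, then move the mouse via user32.
uptimeMs := DllCall("GetTickCount")                 ; milliseconds since boot (default Int return is fine for a demo)
MsgBox % "System uptime: " Round(uptimeMs / 60000) " minutes"
DllCall("SetCursorPos", "Int", 100, "Int", 100)     ; park the cursor at 100,100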

GDI+ - Giving you the ability to create and manipulate graphics on the Windows level

GDI is Windows' Graphics Device Interface. A user named Tic (Tariq Porter) wrote a GDI+ library for AHK. It handles ALL the DllCalls you need to make to GDI to manipulate graphics, draw shapes and objects, import pictures, etc. You know all the stuff you can do in Paint? You can do ALL of that, anywhere, at any time, on the screen, using AHK & GDI+. Without ever having to load Paint. Please note the GDI+ repo also includes tutorials on how to use the library. It doesn't cover everything quite as in-depth as I'd like, but the examples will give you plenty to go off of.
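As a rough sketch of how the library gets used (this assumes you've dropped Tic's Gdip_All.ahk next to your script; the image path is made up):
Example:
#Include Gdip_All.ahk
; Start GDI+, load an image from disk, report its size, then clean up.
pToken := Gdip_Startup()
pBitmap := Gdip_CreateBitmapFromFile("C:\Temp\cover.png")   ; hypothetical file
MsgBox % "Image is " Gdip_GetImageWidth(pBitmap) " x " Gdip_GetImageHeight(pBitmap)
Gdip_DisposeImage(pBitmap)
Gdip_Shutdown(pToken)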
Why are you still reading this? You should have gotten distracted way up top and started trying stuff on Code Abbey!
OK, one challenge I always like to give people.
Recreate the Windows calculator. And make it work exactly the same. It sounds easy at first! But duplicating the functionality AND the aesthetics can be pretty tricky. This is actually a tough challenge with lots of parts. You'll have to make a GUI that looks as close as possible to calc.exe. Make the display function the same, make every button work correctly, calculations should work, memory buttons should function, etc. Don't forget to make an icon for it and to disable the AHK system tray icon, just like the real thing. Oh! And recreate the menu, too. This will give you practice on TONS of different aspects. It's complex enough to be challenging but not so complex that no one would ever want to do it.
If you can get this done and want to extend things further, try making the scientific version of the calculator!! There's a real challenge. The extra advanced math buttons each have to work correctly.
Go, try, learn. If you get stuck, come back to the sub and ask for help. Or hit up the Discord crew. While you're waiting for an answer, you can always go through some of the current questions on the sub.
I hope you guys enjoy this post.
submitted by GroggyOtter to AutoHotkey [link] [comments]

TrapoChapHouse just released a user survey. It's about what you'd expect. tl;dr inside

As part of a 125,000 subscriber special, ya bois in CTH released a survey (archived) of their users. If you've been wondering what makes the average commie tick, there's some good info to be gleaned here.

Data

The survey had a total of 6,672 responses as I write this. It was a self-selected poll, so some skew should be expected, but it's more than enough for a representative sample, with a 1.53% margin of error at a 99% confidence level.
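For what it's worth, that 1.53% figure checks out under the usual worst-case assumptions (p = 0.5, z ≈ 2.576 for 99% confidence) with a finite-population correction against the ~125,000 subscribers:
MOE = z * sqrt(p(1-p)/n) * sqrt((N-n)/(N-1)) = 2.576 * sqrt(0.25/6672) * sqrt(118328/124999) ≈ 0.0153, i.e. about 1.53%.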

Age

Nearly two-thirds (61.5%) of CTH users are at or under the age of 25, with the single largest group being 18-21, at 26.4%. More than a third (36.6%) are at or below the legal drinking age.
Reddit, on the other hand, has the largest category of its users in the 30-49 age range at 34%, and 22% between 18 and 29. https://www.techjunkie.com/demographics-reddit/#Age_and_Gender.
Chapo's userbase is therefore a great deal younger than most of Reddit.

Gender

Mostly guys - 79.6% male, 12.4% female. This shows a troubling lack of representation, being more skewed male than Reddit as a whole at 67-69% male.
Chapo, for all their bluster, is another patriarchy-dominated space with comparatively little female representation. Yikes. Perhaps women just aren't into advocating murder?
The next option down is mentally ill non-binary with 5.6%, with a smattering of troll answers and "rather not say" claiming the last 2.4%.
Speaking of mental illness, nearly one in ten (9.2%) identify as transgender, with that number going up to 13.1% if we include the "I am unsure" responses. (Quick show of hands, who here is sure whether they're trans or not?)

Race

Mostly white, as if that were a surprise. 77.6% of Chapos are white. Going down the list from there, we have 6.3% Hispanic, 2.8% black, 2.5% Indian (dot, not tipi), 2.3% Arab, 2.1% east Asian, with smaller groups claiming insignificant percentages from there.
This makes CTH less diverse than Reddit, which is 65% white, 15% hispanic, and 12% black as its largest groups.
So, we can conclude that Chapo is not only sexist, it is also racist. After all, big tech has taught us that identity is everything, and it's the color of your skin, not the content of your character, that matters.

Mental illness (again)

Nearly a fifth (18.6%) of Chapos identify as "neurodivergent". If that sounds like some bit of intersectional pomo newspeak horseshit, you'd be right.
The classification of neurodivergence (e.g. autism, ADHD, dyslexia, bipolarity) as medical/psychiatric pathology has no valid scientific basis.
In any case, you can substitute this term for the phrase "mental illness", either self-diagnosed or otherwise, and be correct.

Celibacy

51% of Chapos engage in a bit of the old in out in out, while 49% are some variety of incel.

Language

13.6% of Chapos do not speak English as their primary language. Given that English is the primary language of roughly 80% of the US, some percentage of these people are likely not US citizens. We'll get back to that in a minute.
I'd be willing to bet, but cannot prove, these are the people most vocal about how the US should be run.

Religion

Okay, before reading further, just guess what the primary religious belief of the average ersatz-communist is.
Guesses made?
50.8% of Chapos claim to be atheist, with a further 26% claiming to be agnostic (a pretty meaningless term).
So, communists are mostly godless. This isn't exactly unknown.
As for the remaining 23.2% of Chapos who don't disclaim religion outright, 7.6% (give or take some sub-0.1% amounts, since the chart doesn't go into detail on the tiniest responses) claim some sort of Christianity.
Someone wanna explain that one to me?
Comparing that to Reddit... there's not a lot of good data. We can make a guess, though, given the relative popularity of /atheism vs. /christianity (and its offshoots like /catholicism and /orthodoxchristianity). The educated guess would be that Reddit skews atheist, but it's impossible to say with certainty how this stacks up against Chapo.

Country

67.6% of Chapos live in the USA, which means just under a third (32.4%) live outside the country. The largest group of the non-Americans are A FUCKING LEAF (Canada) with 6.6%, the limey Brits claiming another 6.6%, and Old Zealand with 3.6%.

Where in the USA (is Carmen Sandiego?)

The single largest region claimed by Chapos is the Midwest, with 22.7%, followed by northeast at 19.6%, southeast at 13.7%, and the west coast with 11.8%.
If we divide the respondents into "coastal" or not, roughly 56.9% are coastal (plus or minus a few points, since the region categories are fuzzy).
Either way, the "coastal left" stereotype seems mostly confirmed.

Education conditions

The "do you have a degree" split is right down the middle, with 50.5% of Chapos having no degree, and 49.5 having an associates or better. The largest category, 29.8%, has a bachelors.
5.7% of Chapos never finished high school, and 29.2% started and never finished college.

Employment

Place your bets..
40% of Chapos are unemployed or employed less than full time.
18.5% claim "employed student", but this could be either full or part time. Given what communists think of capitalism and hard work, the answer is likely part, but it is not possible to say definitively from the answers.
If I make the easy generalization and say that those employed students are all working part time, the previous figure jumps to 58.5% of Chapos being unemployed or employed less than full time.
35.6% are employed full time or self-employed. You may now proceed to speculate about what "self employment" is for a capital-hating communist.

Finances

Now we get to the good stuff.
43% of Chapos make less than 20K a year. Given the previous stat that places most of them in coastal areas, likely with high costs of living, I would venture an educated guess that most are below the poverty line.
Remember that 30K a year is $15/hour before taxes. With that line in the sand drawn, it means that 44.3% of Chapos are making at or below entry level wages.
A few of them (4.6%) claim to make more than $100K a year. These users should leave immediately, because they'd be first up against the wall when the communist revolution takes hold.
94.7% of Chapos are in some kind of debt with 48.2% of that debt being, unsurprisingly, student loans. 42.2% of Chapos have a debt value below $1K, with 52.5% having more than that.
71.8% describe their material conditions as "comfortable" or "adequate".

Living conditions

Again, place your bets, and get ready to have a stereotype confirmed
.
.
.
.
.
.
.
.
.
.
41.9% of Chapos live with their parents!
20.8% live with friends or roommates, with an additional 20% living with a significant other. 13.7% live alone.
Only 8.9% of Chapos are homeowners.

Ideology

Some form of extreme leftism is, shockingly enough, the most common ideology claimed.
This was clearly a "pick multiple" answer because the percentages given go above 100%, but "democratic socialist" was the most common choice with 35.2%, 22.2% communist, 18.1% Marxist, and so forth.
2% actually claimed Juche (the extreme socio-political system North Korea runs under) - whether this was for pure meme value or sincerely held, it's hard to say.
54.2% of Chapos claim to be a member of a political party or organization.

Voting

Only 37% of Chapos voted in the 2016 primaries. Of those who didn't:
30.4% were not American (shock!)
18.2% were too young
12.1% just didn't vote
2.3% lost the right to vote
(These numbers are roughly the same for voting in the presidential election, only varying by a few percent.)
Speaking of which, 52.7% of Chapo voters voted for Bernie and 39.9% for Hillary in the primaries. When it came to the presidential election, 45.4% voted for Hillary, with a full third deciding not to vote at all once Bernie was out of the running.
Maybe CTH should shut the fuck up about American politics if they can't/won't exercise their ability to do anything about it?
If the Dem primaries were held today, 71.8% of Chapos would vote again for Bernie.

Conclusions:

The average Chapo:
submitted by Shadilay_Were_Off to ShitPoliticsSays [link] [comments]

Adding cover artwork to CDI disc images for GDEMU/GDMENU

A question came up from u/pvcHook in a recent post about adding artwork to GDI images: can the same be done for games in CDI format? The answer is yes, and the general process is the same as it is for GDI games. I've already added all of the appropriate artwork to the indie shmup games and such; can I share those here, or is that a no-no? If that's all you're here for, it would be a lot easier than putting yourself through this process. But it's something to learn, so read on.
First, if you want to do this, you're going to need the proper tools. Someone put together a CDI toolkit (password: DCSTUFF) of sorts on another forum; this is basically the same thing with a few additions and tweaks I've made. Before you begin, install ISO Buster from the 'isobuster' folder. You will also need the PVR Viewer utility to create the artwork files for the discs. The images you generate will need to be mounted to a virtual drive, so Daemon Tools or some other drive emulation software will also be required. And finally, you'll need a copy of DiscJuggler to write your images into a format usable by an emulator or your GDEMU.
EXTRACTION
Here are the general extraction steps, I'll go into a bit more detail after the list:
  1. Copy your CDI image to the 'cdirip' folder in the toolkit and run the 'CDIrip pause.bat' file. Choose an output directory (preferably the 'isofix' folder) and let it rip. You will need to note the LBA info of the tracks being extracted (which is why I made this pause batch file). If only two tracks are extracted, then look closely at the sizes of the sectors that were extracted. If the first track is the larger of the two, then you will not need to use isofix to extract the contents. If the second track is the larger of the two, make note of its LBA value to use with isofix to extract its contents.
  2. Make sure you have installed ISO Buster, you will need it beyond this point.
  3. Go to the 'isofix' folder and you will see the contents of the disc. There will be image files named with the 'TData#.iso' convention and those are what we need to use. The steps diverge a bit from this point depending upon the format of the disc you just extracted; read carefully and follow the instructions for your situation.
  4. If the first track extracted in step one was the larger of the two tracks, open it in ISO Buster and go to step #7.
  5. If the second track extracted in step one was the larger of the two tracks, open a command prompt in 'isofix' (shift+right click) and type "isofix.exe TData#.iso" and give the utility the LBA you noted in step 1 when prompted for it. This will dump a new iso file into the folder called 'fixed.iso'. Open 'fixed.iso' in ISO Buster and go to step #7.
  6. If CDIrip extracted a bunch of wave files and a 'TData#.iso' file, the disc you extracted uses CDDA. Open a command prompt in 'isofix' (shift+right click) and type "isofix.exe TData#.iso" and give the utility the LBA you noted in step 1 when prompted for it. This will dump a new iso file into the folder called 'fixed.iso'. Open 'fixed.iso' in ISO Buster and go to step #7.
  7. In the left pane of ISO Buster you'll see the file structure of the iso file you opened; expand the tree until you see a red 'iso' icon and click on it. This should open up the files and folders within it in the right pane. Highlight all of these files, right click and choose 'Extract Objects'; choose the 'discroot' folder in the CDI toolkit.
Your CDI image is now extracted. Please note that all of the indie releases from NGDEV.TEAM, Hucast.Net, and Duranik use CDDA; you'll see the difference when it's time to rebuild the disc image. Also, if you're using PowerShell instead of the command prompt, the commands are slightly different; you would need to type '.\isofix' (minus quotes) to execute isofix, for example.
COVER ART CREATION
There are other guides out there on converting cover art files into the PVR format that the Dreamcast and GDEMU/GDMenu use, so I won't go into great detail about that here. I will note, however, that I generally load games up in Redream at least once so it fetches the cover art for them. Those covers are very good quality sources, and at 512x512 they hold up well when you reduce them to 256x256 for the GDMenu.
I will say, however, that a lot of the process in the guide I linked to is optional; you can simply open the source file in PVR Viewer and save it as a .pvr file and it will be fine. But feel free to get as detailed as you like with it.
REBUILDING
Once you have your cover art to your liking, make sure it's been placed in the 'discroot' folder and you can begin the image rebuilding process.
We'll start with an image that doesn't use CDDA:
  1. Check the 'discroot' folder for two files: 1ST_READ.BIN and IP.BIN. Select them, then copy and paste them into the 'binhack32' folder in the toolkit. Run the binhack32.exe application in the 'binhack32' folder (you may have to tweak your antivirus settings to do this).
  2. Binhack32 will prompt you to "enter name of binary": this is 1ST_READ.BIN, type it correctly and remember it is case sensitive. Once you enter the binary, you will be prompted to "enter name of bootsector": this is IP.BIN, again type correctly and remember case.
  3. The next prompt will ask you to update the LBA value of the binaries. Enter zero ( 0 ) for this value, since we are removing the preceding audio session track and telling the binaries to start from the beginning of the disc. Once the utility is done, select the two bin files, then cut and paste them back into the 'discroot' folder; overwrite when prompted.
  4. Open the 'bootdreams' folder and start up the BootDreams.exe executable. Before doing anything click on the "Extras" entry in the menu bar, and hover over "Dummy file"; some options will pop out. If you are burning off the discs for any reason, be sure to use one of the options, 650MB or 700MB. If you aren't burning them, still consider using the dummy data. It will compress down to nothing if you're saving these disc images for archival reasons.
  5. Click on the far left icon on the top of BootDreams, the green DiscJuggler icon. Open or drag'n'drop the 'discroot' folder into the "selfboot folder" field, and add whatever label you want for the disc (limited to 8 characters, otherwise you'll get an error). Change disc format to 'data/data', then click on the process button.
  6. If you get a prompt asking to scramble the binary, say no. Retail games that run off of Katana or Windows CE binaries don't need to be scrambled; if this is a true homebrew application or game, then it might need to be scrambled.
  7. Choose an output location for the CDI image, and let the utilities go to work. If everything was set up properly you'll get a new disc image with cover art. I always boot the CDI up in RetroArch or another emulator to make sure it's valid and runs as expected so you don't waste time transferring a bad dump to your GDEMU (or burning a bad disc).
If your game uses CDDA, the process involves a few more steps, but it's nothing terribly complicated:
  1. Check the 'discroot' folder for the IP.BIN file. If it's there, everything is good, continue on to the next step. If it's not there, look in the 'isofix' directory: there should be a file called "bootsector.bin" in that folder. Copy that file and paste it into the 'discroot' folder, then rename it IP.BIN (all caps, even the file extension). Now you're good, go on to the next step.
  2. Remember all those files dumped into the 'isofix' directory? Go look at them now. Copy/cut and paste all of those wave files from 'isofix' into the 'bootdreams/cdda' folder.
  3. Start up the bootdreams.exe executable from the 'bootdreams' folder.
  4. Select the middle icon at the top of the BootDreams window, the big red 'A' for Alcohol 120% image. Once you've selected this, click on 'Extras' up in the menu bar and make sure the 'Add CDDA tracks' option is selected (has a check mark next to it).
  5. Open/drag'n'drop the finished 'discroot' folder into the selfboot folder field; put whatever name you'd like for the disc in the CD label field. Click on the process button.
  6. If you get a prompt asking to scramble the binary, say no. Retail games that run off of Katana or Windows CE binaries don't need to be scrambled; if this is a true homebrew application or game, then it might need to be scrambled.
  7. A window showing you the audio files in the 'cdda' folder will pop up. Highlight all of them in the left pane and click the right-pointing arrow in the middle of the two fields to add them to the project. Make sure they are in order! Then click on OK. The audio files are converted to the appropriate raw format and the process continues. Choose an output location for the MDS/MDF files.
  8. When the files are finished, find them and mount them into a virtual drive (with Daemon Tools or whatever utility you prefer). Open up DiscJuggler and we'll make a CDI image.
  9. Start a new project in DiscJuggler (File > New, then choose 'Create disc images' from the menu). Choose your virtual drive with mounted image in the source field, and set your file output in the destination field. Click the Advanced tab above, and make sure 'Overburn disc' is selected. Click Start to begin converting into a CDI image.
  10. When DiscJuggler is done, close it down, unmount and delete the MDS/MDF files created by BootDreams, and test your CDI image with RetroArch or another emulator before transferring it to your GDEMU.
If you have followed these steps and the disc image will absolutely not boot, then it's possible that a certain disc layout is required and must be used. I have only run into this a few times, but in this situation you simply need to use the 'audio/data' option for the CDI image in Bootdreams to put the image back together. Please note: if you are going to try to build the image with the 'audio/data' option, then make sure you replace the IP.BIN file in the 'discroot' folder with the original, unmodified bootsector.bin file in the 'isofix' folder. The leading audio track is a set size, and the IP.BIN will be expecting this; remember, the IP.BIN modified by binhack32 changes the LBA value of the file and it won't work properly with the audio/data method.
These methods have worked for me each and every time I've wanted to add artwork to a CDI image, and it should work for you as well. This will also keep the original IP.BIN files from the discs, so it should keep anything that references this information intact (like the cover art function in Redream). If it doesn't, then the rebuilt images with artwork can be used on your GDEMU and you can keep the original disc images to use in Redream or wherever.
Let me know if anything is unclear and I can clean the guide up a bit. Or if I can just share the link to my Drive with the images done and uploaded!
submitted by king_of_dirt to dreamcast [link] [comments]
