
Adding cover artwork to CDI disc images for GDEMU/GDMENU

A question came up from u/pvcHook in a recent post about adding artwork to GDI images: can the same be done for games in a CDI format? The answer is yes, and the general process is the same as it is for the GDI games. I've already added all of the appropriate artwork to all of the indie shmup games and all that; can I share those here, or is that a no-no? Because if that's all you're here for, that would be a lot easier than putting yourself through this process. But it's something to learn, so read on.
First, if you want to do this, you're going to need the proper tools. Someone put together a CDI toolkit (password: DCSTUFF) of sorts on another forum; this is basically the same thing with a few additions and tweaks I've made. Before you begin, install ISO Buster from the 'isobuster' folder. You will also need the PVR Viewer utility to create the artwork files for the discs. The images you generate will need to be mounted to a virtual drive, so Daemon Tools or some other drive emulation software will also be required. And finally you'll need a copy of DiscJuggler to write your images into a format usable by an emulator or your GDEMU.
EXTRACTION
Here are the general extraction steps, I'll go into a bit more detail after the list:
  1. Copy your CDI image to the 'cdirip' folder in the toolkit and run the 'CDIrip pause.bat' file. Choose an output directory (preferably the 'isofix' folder) and let it rip. You will need to note the LBA info of the tracks being extracted (which is why I made this pause batch file). If only two tracks are extracted, look closely at the sizes of the two tracks. If the first track is the larger of the two, you will not need to use isofix to extract the contents. If the second track is the larger of the two, make note of its LBA value to use with isofix to extract its contents.
  2. Make sure you have installed ISO Buster, you will need it beyond this point.
  3. Go to the 'isofix' folder and you will see the contents of the disc. There will be image files named with the 'TData#.iso' convention and those are what we need to use. The steps diverge a bit from this point depending upon the format of the disc you just extracted; read carefully and follow the instructions for your situation.
  4. If the first track extracted in step one was the larger of the two tracks, open it in ISO Buster and go to step #7.
  5. If the second track extracted in step one was the larger of the two tracks, open a command prompt in 'isofix' (shift+right click) and type "isofix.exe TData#.iso" and give the utility the LBA you noted in step 1 when prompted for it. This will dump a new iso file into the folder called 'fixed.iso'. Open 'fixed.iso' in ISO Buster and go to step #7.
  6. If CDIrip extracted a bunch of wave files and a 'TData#.iso' file, the disc you extracted uses CDDA. Open a command prompt in 'isofix' (shift+right click) and type "isofix.exe TData#.iso" and give the utility the LBA you noted in step 1 when prompted for it. This will dump a new iso file into the folder called 'fixed.iso'. Open 'fixed.iso' in ISO Buster and go to step #7.
  7. In the left pane of ISO Buster you'll see the file structure of the iso file you opened; expand the tree until you see a red 'iso' icon and click on it. This should open up the files and folders within it in the right pane. Highlight all of these files, right click and choose 'Extract Objects'; choose the 'discroot' folder in the CDI toolkit.
Your CDI image is now extracted. Please note that all of the indie releases from NGDEV.TEAM, Hucast.Net, and Duranik use the CDDA format. You'll see the difference when it's time to rebuild the disc image. Also, if you're using PowerShell and not Command Prompt, the commands to run the command-line utilities are a bit different; you would need to type out '.\isofix' (minus quotes) to execute isofix, for example.
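For reference, this is roughly what the isofix step looks like in each shell (illustrative only: 'TData2.iso' is a placeholder name and 11702 is just a typical LBA for a second session, so use the file name and LBA that CDIrip actually reported for your disc):

  Command Prompt:  isofix.exe TData2.iso
  PowerShell:      .\isofix.exe TData2.iso

Either way, enter the LBA you noted (e.g. 11702) when the utility prompts for it.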
COVER ART CREATION
There are other guides out there concerned with converting cover art files into the PVR format that the Dreamcast and GDEMU/GDMenu use, so I won't go into great detail about that here. I will note, however, that I generally load games up in Redream at least once so it fetches the cover art for the games. They are very good quality sources, and since they're 512x512 they downscale cleanly to the 256x256 that GDMenu uses.
I will say, however, that a lot of the process in the guide I linked to is optional; you can simply open the source file in PVR Viewer and save it as a .pvr file and it will be fine. But feel free to get as detailed as you like with it.
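If you want to downscale the 512x512 Redream covers to 256x256 before bringing them into PVR Viewer, any image editor works. As one optional example (this assumes ImageMagick 7 is installed and your covers are .png files in the current folder), a single command will batch-resize everything in place:

  magick mogrify -resize 256x256 *.png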
REBUILDING
Once you have your cover art to your liking, make sure it's been placed in the 'discroot' folder and you can begin the image rebuilding process.
We'll start with an image that doesn't use CDDA:
  1. Check the 'discroot' folder for two files: 1ST_READ.BIN and IP.BIN. Select them, then copy and paste them into the 'binhack32' folder in the toolkit. Run the binhack32.exe application in the 'binhack32' folder (you may have to tweak your antivirus settings to do this).
  2. Binhack32 will prompt you to "enter name of binary": this is 1ST_READ.BIN, type it correctly and remember it is case sensitive. Once you enter the binary, you will be prompted to "enter name of bootsector": this is IP.BIN, again type correctly and remember case.
  3. The next prompt will ask you to update the LBA value of the binaries. Enter zero (0) for this value, since we are removing the preceding audio session track and telling the binaries to start from the beginning of the disc; there's an example of the full binhack32 session after this list. Once the utility is done, select the two bin files, then cut and paste them back into the 'discroot' folder; overwrite when prompted.
  4. Open the 'bootdreams' folder and start up the BootDreams.exe executable. Before doing anything click on the "Extras" entry in the menu bar, and hover over "Dummy file"; some options will pop out. If you are burning off the discs for any reason, be sure to use one of the options, 650MB or 700MB. If you aren't burning them, still consider using the dummy data. It will compress down to nothing if you're saving these disc images for archival reasons.
  5. Click on the far left icon on the top of BootDreams, the green DiscJuggler icon. Open or drag'n'drop the 'discroot' folder into the "selfboot folder" field, and add whatever label you want for the disc (limited to 8 characters, otherwise you'll get an error). Change disc format to 'data/data', then click on the process button.
  6. If you get a prompt asking to scramble the binary, say no. Retail games that run off of Katana or Windows CE binaries don't need to be scrambled; if this is a true homebrew application or game, then it might need to be scrambled.
  7. Choose an output location for the CDI image, and let the utilities go to work. If everything was set up properly you'll get a new disc image with cover art. I always boot the CDI up in RetroArch or another emulator to make sure it's valid and runs as expected so you don't waste time transferring a bad dump to your GDEMU (or burning a bad disc).
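To make steps 2 and 3 concrete, a binhack32 session boils down to three answers. The prompt wording below is approximate, but the values are what matter once the leading audio session has been removed:

  enter name of binary: 1ST_READ.BIN
  enter name of bootsector: IP.BIN
  enter new LBA value: 0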
If your game uses CDDA, the process involves a few more steps, but it's nothing terribly complicated:
  1. Check the 'discroot' folder for the IP.BIN file. If it's there, everything is good, continue on to the next step. If it's not there, look in the 'isofix' directory: there should be a file called "bootsector.bin" in that folder. Copy that file and paste it into the 'discroot' folder, then rename it IP.BIN (all caps, even the file extension). Now you're good, go on to the next step.
  2. Remember all those files dumped into the 'isofix' directory? Go look at them now. Copy/cut and paste all of those wave files from 'isofix' into the 'bootdreams/cdda' folder.
  3. Start up the bootdreams.exe executable from the 'bootdreams' folder.
  4. Select the middle icon at the top of the BootDreams window, the big red 'A' for Alcohol 120% image. Once you've selected this, click on 'Extras' up in the menu bar and make sure the 'Add CDDA tracks' option is selected (has a check mark next to it).
  5. Open/drag'n'drop the finished 'discroot' folder into the selfboot folder field; put whatever name you'd like for the disc in the CD label field. Click on the process button.
  6. If you get a prompt asking to scramble the binary, say no. Retail games that run off of Katana or Windows CE binaries don't need to be scrambled; if this is a true homebrew application or game, then it might need to be scrambled.
  7. A window showing you the audio files in the 'cdda' folder will pop up. Highlight all of them in the left pane and click the right-pointing arrow in the middle of the two fields to add them to the project. Make sure they are in order! Then click on OK. The audio files are converted to the appropriate raw format and the process continues. Choose an output location for the MDS/MDF files.
  8. When the files are finished, find them and mount them into a virtual drive (with Daemon Tools or whatever utility you prefer). Open up DiscJuggler and we'll make a CDI image.
  9. Start a new project in DiscJuggler (File > New, then choose 'Create disc images' from the menu). Choose your virtual drive with mounted image in the source field, and set your file output in the destination field. Click the Advanced tab above, and make sure 'Overburn disc' is selected. Click Start to begin converting into a CDI image.
  10. When DiscJuggler is done, close it down, unmount and delete the MDS/MDF files created by BootDreams, and test your CDI image with RetroArch or another emulator before transferring it to your GDEMU.
If you have followed these steps and the disc image absolutely will not boot, then it's possible that the game requires a specific disc layout. I have only run into this a few times, but in this situation you simply need to use the 'audio/data' option in BootDreams to put the CDI image back together. Please note: if you are going to build the image with the 'audio/data' option, make sure you replace the IP.BIN file in the 'discroot' folder with the original, unmodified bootsector.bin file from the 'isofix' folder. The leading audio track is a set size and the IP.BIN will be expecting this; remember, binhack32 changes the LBA value stored in IP.BIN, so the modified file won't work properly with the audio/data method.
These methods have worked for me every time I've wanted to add artwork to a CDI image, and they should work for you as well. This also keeps the original IP.BIN data from the discs, so anything that references that information should stay intact (like the cover art function in Redream). If it doesn't, then the rebuilt images with artwork can be used on your GDEMU and you can keep the original disc images to use in Redream or wherever.
Let me know if anything is unclear and I can clean the guide up a bit. Or if I can just share the link to my Drive with the images done and uploaded!
submitted by king_of_dirt to r/dreamcast

Vault 7 - CIA Hacking Tools Revealed

March 07, 2017
from Wikileaks Website


https://preview.redd.it/9ufj63xnfdb41.jpg?width=500&format=pjpg&auto=webp&s=46bbc937f4f060bad1eaac3e0dce732e3d8346ee

Press Release
Today, Tuesday 7 March 2017, WikiLeaks begins its new series of leaks on the U.S. Central Intelligence Agency.
Code-named "Vault 7" by WikiLeaks, it is the largest ever publication of confidential documents on the agency.
The first full part of the series, "Year Zero", comprises 8,761 documents and files from an isolated, high-security network situated inside the CIA's Center for Cyber Intelligence (below image) in Langley, Virginia.
It follows an introductory disclosure last month of CIA targeting French political parties and candidates in the lead up to the 2012 presidential election.
Recently, the CIA lost control of the majority of its hacking arsenal, including:
  1. malware
  2. viruses
  3. trojans
  4. weaponized "zero day" exploits
  5. malware remote control systems

...and associated documentation.
This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA.
The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive.
"Year Zero" introduces the scope and direction of the CIA's global covert hacking program, its malware arsenal and dozens of "zero day" weaponized exploits against a wide range of U.S. and European company products, include,

  1. Apple's iPhone
  2. Google's Android
  3. Microsoft's Windows
  4. Samsung TVs,

...which are turned into covert microphones.
Since 2001 the CIA has gained political and budgetary preeminence over the U.S. National Security Agency (NSA).
The CIA found itself building not just its now infamous drone fleet, but a very different type of covert, globe-spanning force - its own substantial fleet of hackers.
The agency's hacking division freed it from having to disclose its often controversial operations to the NSA (its primary bureaucratic rival) in order to draw on the NSA's hacking capacities.
By the end of 2016, the CIA's hacking division, which formally falls under the agency's Center for Cyber Intelligence (CCI - below image), had over 5000 registered users and had produced more than a thousand hacking systems, trojans, viruses, and other "weaponized" malware.


https://preview.redd.it/3jsojkqxfdb41.jpg?width=366&format=pjpg&auto=webp&s=e92eafbb113ab3e972045cc242dde0f0dd511e96

Such is the scale of the CIA's undertaking that by 2016, its hackers had utilized more code than that used to run Facebook.
The CIA had created, in effect, its "own NSA" with even less accountability and without publicly answering the question as to whether such a massive budgetary spend on duplicating the capacities of a rival agency could be justified.
In a statement to WikiLeaks the source details policy questions that they say urgently need to be debated in public, including whether the CIA's hacking capabilities exceed its mandated powers and the problem of public oversight of the agency.
The source wishes to initiate a public debate about the security, creation, use, proliferation and democratic control of cyberweapons.
Once a single cyber 'weapon' is 'loose' it can spread around the world in seconds, to be used by rival states, cyber mafia and teenage hackers alike.

Julian Assange, WikiLeaks editor stated that,
"There is an extreme proliferation risk in the development of cyber 'weapons'.
Comparisons can be drawn between the uncontrolled proliferation of such 'weapons', which results from the inability to contain them combined with their high market value, and the global arms trade.
But the significance of 'Year Zero' goes well beyond the choice between cyberwar and cyberpeace. The disclosure is also exceptional from a political, legal and forensic perspective."

Wikileaks has carefully reviewed the "Year Zero" disclosure and published substantive CIA documentation while avoiding the distribution of 'armed' cyberweapons until a consensus emerges on the technical and political nature of the CIA's program and how such 'weapons' should be analyzed, disarmed and published.

Wikileaks has also decided to redact (see far below) and anonymize some identifying information in "Year Zero" for in-depth analysis. These redactions include tens of thousands of CIA targets and attack machines throughout Latin America, Europe and the United States.

While we are aware of the imperfect results of any approach chosen, we remain committed to our publishing model and note that the quantity of published pages in "Vault 7" part one ("Year Zero") already eclipses the total number of pages published over the first three years of the Edward Snowden NSA leaks.

Analysis

CIA malware targets iPhone, Android, smart TVs
CIA malware and hacking tools are built by EDG (Engineering Development Group), a software development group within CCI (Center for Cyber Intelligence), a department belonging to the CIA's DDI (Directorate for Digital Innovation).
The DDI is one of the five major directorates of the CIA (see above image of the CIA for more details).
The EDG is responsible for the development, testing and operational support of all backdoors, exploits, malicious payloads, trojans, viruses and any other kind of malware used by the CIA in its covert operations world-wide.
The increasing sophistication of surveillance techniques has drawn comparisons with George Orwell's 1984, but "Weeping Angel", developed by the CIA's Embedded Devices Branch (EDB), which infests smart TVs, transforming them into covert microphones, is surely its most emblematic realization.
The attack against Samsung smart TVs was developed in cooperation with the United Kingdom's MI5/BTSS.
After infestation, Weeping Angel places the target TV in a 'Fake-Off' mode, so that the owner falsely believes the TV is off when it is on. In 'Fake-Off' mode the TV operates as a bug, recording conversations in the room and sending them over the Internet to a covert CIA server.
As of October 2014 the CIA was also looking at infecting the vehicle control systems used by modern cars and trucks. The purpose of such control is not specified, but it would permit the CIA to engage in nearly undetectable assassinations.
The CIA's Mobile Devices Branch (MDB) developed numerous attacks to remotely hack and control popular smart phones. Infected phones can be instructed to send the CIA the user's geolocation, audio and text communications as well as covertly activate the phone's camera and microphone.
Despite iPhone's minority share (14.5%) of the global smart phone market in 2016, a specialized unit in the CIA's Mobile Development Branch produces malware to infest, control and exfiltrate data from iPhones and other Apple products running iOS, such as iPads.
CIA's arsenal includes numerous local and remote "zero days" developed by CIA or obtained from GCHQ, NSA, FBI or purchased from cyber arms contractors such as Baitshop.
The disproportionate focus on iOS may be explained by the popularity of the iPhone among social, political, diplomatic and business elites.
A similar unit targets Google's Android which is used to run the majority of the world's smart phones (~85%) including Samsung, HTC and Sony. 1.15 billion Android powered phones were sold last year.
"Year Zero" shows that as of 2016 the CIA had 24 "weaponized" Android "zero days" which it has developed itself and obtained from GCHQ, NSA and cyber arms contractors.
These techniques permit the CIA to bypass the encryption of:
  1. WhatsApp
  2. Signal
  3. Telegram
  4. Wiebo
  5. Confide
  6. Cloackman
...by hacking the "smart" phones that they run on and collecting audio and message traffic before encryption is applied.
CIA malware targets Windows, OS X, Linux, routers
The CIA also runs a very substantial effort to infect and control Microsoft Windows users with its malware.
This includes multiple local and remote weaponized "zero days", air gap jumping viruses such as "Hammer Drill" which infects software distributed on CD/DVDs, infectors for removable media such as USBs, systems to hide data in images or in covert disk areas ("Brutal Kangaroo") and to keep its malware infestations going.
Many of these infection efforts are pulled together by the CIA's Automated Implant Branch (AIB), which has developed several attack systems for automated infestation and control of CIA malware, such as "Assassin" and "Medusa".
Attacks against Internet infrastructure and webservers are developed by the CIA's Network Devices Branch (NDB).
The CIA has developed automated multi-platform malware attack and control systems covering Windows, Mac OS X, Solaris, Linux and more, such as EDB's "HIVE" and the related "Cutthroat" and "Swindle" tools, which are described in the examples section far below.
CIA 'hoarded' vulnerabilities ("zero days")
In the wake of Edward Snowden's leaks about the NSA, the U.S. technology industry secured a commitment from the Obama administration that the executive would disclose on an ongoing basis - rather than hoard - serious vulnerabilities, exploits, bugs or "zero days" to Apple, Google, Microsoft, and other US-based manufacturers.
Serious vulnerabilities not disclosed to the manufacturers place huge swathes of the population and critical infrastructure at risk to foreign intelligence or cyber criminals who independently discover or hear rumors of the vulnerability.
If the CIA can discover such vulnerabilities so can others.
The U.S. government's commitment to the Vulnerabilities Equities Process came after significant lobbying by US technology companies, who risk losing their share of the global market over real and perceived hidden vulnerabilities.
The government stated that it would disclose all pervasive vulnerabilities discovered after 2010 on an ongoing basis.
"Year Zero" documents show that the CIA breached the Obama administration's commitments. Many of the vulnerabilities used in the CIA's cyber arsenal are pervasive and some may already have been found by rival intelligence agencies or cyber criminals.
As an example, specific CIA malware revealed in "Year Zero" is able to penetrate, infest and control both the Android phone and iPhone software that runs or has run presidential Twitter accounts.
The CIA attacks this software by using undisclosed security vulnerabilities ("zero days") possessed by the CIA but if the CIA can hack these phones then so can everyone else who has obtained or discovered the vulnerability.
As long as the CIA keeps these vulnerabilities concealed from Apple and Google (who make the phones) they will not be fixed, and the phones will remain hackable.
The same vulnerabilities exist for the population at large, including the U.S. Cabinet, Congress, top CEOs, system administrators, security officers and engineers.
By hiding these security flaws from manufacturers like Apple and Google the CIA ensures that it can hack everyone at the expense of leaving everyone hackable.
'Cyberwar' programs are a serious proliferation risk
Cyber 'weapons' cannot be kept under effective control.
While nuclear proliferation has been restrained by the enormous costs and visible infrastructure involved in assembling enough fissile material to produce a critical nuclear mass, cyber 'weapons', once developed, are very hard to retain.
Cyber 'weapons' are in fact just computer programs which can be pirated like any other. Since they are entirely comprised of information they can be copied quickly with no marginal cost.
Securing such 'weapons' is particularly difficult since the same people who develop and use them have the skills to exfiltrate copies without leaving traces - sometimes by using the very same 'weapons' against the organizations that contain them.
There are substantial price incentives for government hackers and consultants to obtain copies since there is a global "vulnerability market" that will pay hundreds of thousands to millions of dollars for copies of such 'weapons'.
Similarly, contractors and companies who obtain such 'weapons' sometimes use them for their own purposes, obtaining advantage over their competitors in selling 'hacking' services.
Over the last three years the United States intelligence sector, which consists of government agencies such as the CIA and NSA and their contractors, such as Booz Allen Hamilton, has been subject to an unprecedented series of data exfiltrations by its own workers.
A number of intelligence community members not yet publicly named have been arrested or subject to federal criminal investigations in separate incidents.
Most visibly, on February 8, 2017 a U.S. federal grand jury indicted Harold T. Martin III on 20 counts of mishandling classified information.
The Department of Justice alleged that it seized some 50,000 gigabytes of information from Harold T. Martin III that he had obtained from classified programs at NSA and CIA, including the source code for numerous hacking tools.
Once a single cyber 'weapon' is 'loose' it can spread around the world in seconds, to be used by peer states, cyber mafia and teenage hackers alike.
U.S. Consulate in Frankfurt is a covert CIA hacker base
In addition to its operations in Langley, Virginia the CIA also uses the U.S. consulate in Frankfurt as a covert base for its hackers covering Europe, the Middle East and Africa.
CIA hackers operating out of the Frankfurt consulate ("Center for Cyber Intelligence Europe" or CCIE) are given diplomatic ("black") passports and State Department cover.
The instructions for incoming CIA hackers make Germany's counter-intelligence efforts appear inconsequential: "Breeze through German Customs because you have your cover-for-action story down pat, and all they did was stamp your passport."
Your Cover Story (for this trip): Q: Why are you here? A: Supporting technical consultations at the Consulate.
Two earlier WikiLeaks publications give further detail on CIA approaches to customs and secondary screening procedures.
Once in Frankfurt CIA hackers can travel without further border checks to the 25 European countries that are part of the Schengen open border area - including France, Italy and Switzerland.
A number of the CIA's electronic attack methods are designed for physical proximity.
These attack methods are able to penetrate high security networks that are disconnected from the internet, such as police record databases. In these cases, a CIA officer, agent or allied intelligence officer acting under instructions, physically infiltrates the targeted workplace.
The attacker is provided with a USB containing malware developed for the CIA for this purpose, which is inserted into the targeted computer. The attacker then infects and exfiltrates data to removable media.
For example, the CIA attack system Fine Dining provides 24 decoy applications for CIA spies to use.
To witnesses, the spy appears to be running a program showing videos (e.g. VLC), presenting slides (Prezi), playing a computer game (Breakout2, 2048) or even running a fake virus scanner (Kaspersky, McAfee, Sophos).
But while the decoy application is on the screen, the underlying system is automatically infected and ransacked.
How the CIA dramatically increased proliferation risks
In what is surely one of the most astounding intelligence own goals in living memory, the CIA structured its classification regime such that, for the most market-valuable part of "Vault 7" - the CIA's weaponized malware (implants + zero days), Listening Posts (LP) and Command and Control (C2) systems - the agency has little legal recourse.
The CIA made these systems unclassified.
Why the CIA chose to make its cyber-arsenal unclassified reveals how concepts developed for military use do not easily cross over to the 'battlefield' of cyber 'war'.
To attack its targets, the CIA usually requires that its implants communicate with their control programs over the internet.
If CIA implants, Command & Control and Listening Post software were classified, then CIA officers could be prosecuted or dismissed for violating rules that prohibit placing classified information onto the Internet.
Consequently the CIA has secretly made most of its cyber spying/war code unclassified. The U.S. government is not able to assert copyright either, due to restrictions in the U.S. Constitution.
This means that cyber 'arms' manufacturers and computer hackers can freely "pirate" these 'weapons' if they are obtained. The CIA has primarily had to rely on obfuscation to protect its malware secrets.
Conventional weapons such as missiles may be fired at the enemy (i.e. into an unsecured area). Proximity to or impact with the target detonates the ordnance including its classified parts. Hence military personnel do not violate classification rules by firing ordnance with classified parts.
Ordnance will likely explode. If it does not, that is not the operator's intent.
Over the last decade U.S. hacking operations have been increasingly dressed up in military jargon to tap into Department of Defense funding streams.
For instance, attempted "malware injections" (commercial jargon) or "implant drops" (NSA jargon) are being called "fires" as if a weapon was being fired.
However the analogy is questionable.
Unlike bullets, bombs or missiles, most CIA malware is designed to live for days or even years after it has reached its 'target'. CIA malware does not "explode on impact" but rather permanently infests its target. In order to infect a target's device, copies of the malware must be placed on the target's devices, giving physical possession of the malware to the target.
To exfiltrate data back to the CIA or to await further instructions the malware must communicate with CIA Command & Control (C2) systems placed on internet connected servers.
But such servers are typically not approved to hold classified information, so CIA command and control systems are also made unclassified.
A successful 'attack' on a target's computer system is more like a series of complex stock maneuvers in a hostile take-over bid or the careful planting of rumors in order to gain control over an organization's leadership rather than the firing of a weapons system.
If there is a military analogy to be made, the infestation of a target is perhaps akin to the execution of a whole series of military maneuvers against the target's territory including observation, infiltration, occupation and exploitation.
Evading forensics and anti-virus
A series of standards lay out CIA malware infestation patterns which are likely to assist forensic crime scene investigators as well as:
  1. Apple
  2. Microsoft
  3. Google
  4. Samsung
  5. Nokia
  6. Blackberry
  7. Siemens
  8. anti-virus companies,
...attribute and defend against attacks.
"Tradecraft DO's and DON'Ts" contains CIA rules on how its malware should be written to avoid fingerprints implicating the "CIA, US government, or its witting partner companies" in "forensic review".
Similar secret standards cover the use of encryption to hide CIA hacker and malware communication (pdf), describing targets & exfiltrated data (pdf), executing payloads (pdf) and persisting (pdf) in the target's machines over time.
CIA hackers developed successful attacks against most well known anti-virus programs.
These are documented in AV defeats, Personal Security Products, Detecting and defeating PSPs and PSP/DebuggeRE Avoidance. For example, Comodo was defeated by CIA malware placing itself in the Windows "Recycle Bin", while Comodo 6.x has a "Gaping Hole of DOOM".
CIA hackers discussed what the NSA's "Equation Group" hackers did wrong and how the CIA's malware makers could avoid similar exposure.

Examples

The CIA's Engineering Development Group (EDG) management system contains around 500 different projects (only some of which are documented by "Year Zero") each with their own sub-projects, malware and hacker tools.
The majority of these projects relate to tools that are used for penetration, infestation ("implanting"), control and exfiltration.
Another branch of development focuses on the development and operation of Listening Posts (LP) and Command and Control (C2) systems used to communicate with and control CIA implants.
Special projects are used to target specific hardware from routers to smart TVs.
Some example projects are described below, but see the table of contents for the full list of projects described by WikiLeaks' "Year Zero".
UMBRAGE
The CIA's hand crafted hacking techniques pose a problem for the agency.
Each technique it has created forms a "fingerprint" that can be used by forensic investigators to attribute multiple different attacks to the same entity.
This is analogous to finding the same distinctive knife wound on multiple separate murder victims. The unique wounding style creates suspicion that a single murderer is responsible.
As soon as one murder in the set is solved then the other murders also find likely attribution.
The CIA's Remote Devices Branch's UMBRAGE group collects and maintains a substantial library of attack techniques 'stolen' from malware produced in other states including the Russian Federation.
With UMBRAGE and related projects the CIA can not only increase its total number of attack types but also misdirect attribution by leaving behind the "fingerprints" of the groups that the attack techniques were stolen from.
UMBRAGE components cover:
  1. keyloggers
  2. password collection
  3. webcam capture
  4. data destruction
  5. persistence
  6. privilege escalation
  7. stealth
  8. anti-virus (PSP) avoidance
  9. survey techniques

Fine Dining
Fine Dining comes with a standardized questionnaire, i.e. a menu, that CIA case officers fill out.
The questionnaire is used by the agency's OSB (Operational Support Branch) to transform the requests of case officers into technical requirements for hacking attacks (typically "exfiltrating" information from computer systems) for specific operations.
The questionnaire allows the OSB to identify how to adapt existing tools for the operation, and communicate this to CIA malware configuration staff.
The OSB functions as the interface between CIA operational staff and the relevant technical support staff.
Among the list of possible targets of the collection are,
  • 'Asset'
  • 'Liason Asset'
  • 'System Administrator'
  • 'Foreign Information Operations'
  • 'Foreign Intelligence Agencies'
  • 'Foreign Government Entities'
Notably absent is any reference to extremists or transnational criminals. The 'Case Officer' is also asked to specify the environment of the target like the type of computer, operating system used, Internet connectivity and installed anti-virus utilities (PSPs) as well as a list of file types to be exfiltrated like Office documents, audio, video, images or custom file types.
The 'menu' also asks for information if recurring access to the target is possible and how long unobserved access to the computer can be maintained.
This information is used by the CIA's 'JQJIMPROVISE' software (see below) to configure a set of CIA malware suited to the specific needs of an operation.
Improvise (JQJIMPROVISE)
'Improvise' is a toolset for configuration, post-processing, payload setup and execution vector selection for survey/exfiltration tools supporting all major operating systems:
  1. Windows (Bartender)
  2. MacOS (JukeBox)
  3. Linux (DanceFloor)
Its configuration utilities like Margarita allow the NOC (Network Operation Center) to customize tools based on requirements from 'Fine Dining' questionnaires.
HIVE
HIVE is a multi-platform CIA malware suite and its associated control software.
The project provides customizable implants for Windows, Solaris, MikroTik (used in internet routers) and Linux platforms and a Listening Post (LP)/Command and Control (C2) infrastructure to communicate with these implants.
The implants are configured to communicate via HTTPS with the webserver of a cover domain; each operation utilizing these implants has a separate cover domain and the infrastructure can handle any number of cover domains.
Each cover domain resolves to an IP address that is located at a commercial VPS (Virtual Private Server) provider.
The public-facing server forwards all incoming traffic via a VPN to a 'Blot' server that handles actual connection requests from clients.
It is set up for optional SSL client authentication: if a client sends a valid client certificate (only implants can do that), the connection is forwarded to the 'Honeycomb' toolserver that communicates with the implant.
If a valid certificate is missing (which is the case if someone tries to open the cover domain website by accident), the traffic is forwarded to a cover server that delivers an unsuspicious looking website.
The Honeycomb toolserver receives exfiltrated information from the implant; an operator can also task the implant to execute jobs on the target computer, so the toolserver acts as a C2 (command and control) server for the implant.
Similar functionality (though limited to Windows) is provided by the RickBobby project.
See the classified user and developer guides for HIVE.

Frequently Asked Questions

Why now?
WikiLeaks published as soon as its verification and analysis were ready. In February the Trump administration issued an Executive Order calling for a "Cyberwar" review to be prepared within 30 days.
While the review increases the timeliness and relevance of the publication it did not play a role in setting the publication date.
Redactions
Names, email addresses and external IP addresses have been redacted in the released pages (70,875 redactions in total) until further analysis is complete. Over-redaction: Some items may have been redacted that are not employees, contractors, targets or otherwise related to the agency, but are, for example, authors of documentation for otherwise public projects that are used by the agency.
Identity vs. person: the redacted names are replaced by user IDs (numbers) to allow readers to assign multiple pages to a single author. Given the redaction process used a single person may be represented by more than one assigned identifier but no identifier refers to more than one real person.
Archive attachments (zip, tar.gz, ...), are replaced with a PDF listing all the file names in the archive. As the archive content is assessed it may be made available; until then the archive is redacted.
Attachments with other binary content, are replaced by a hex dump of the content to prevent accidental invocation of binaries that may have been infected with weaponized CIA malware. As the content is assessed it may be made available; until then the content is redacted.
Tens of thousands of routable IP address references (including more than 22 thousand within the United States) that correspond to possible targets, CIA covert listening post servers, intermediary and test systems are redacted for further exclusive investigation.
Binary files of non-public origin, are only available as dumps to prevent accidental invocation of CIA malware infected binaries.
Organizational Chart
The organizational chart (far above image) corresponds to the material published by WikiLeaks so far.
Since the organizational structure of the CIA below the level of Directorates is not public, the placement of the EDG and its branches within the org chart of the agency is reconstructed from information contained in the documents released so far.
It is intended to be used as a rough outline of the internal organization; please be aware that the reconstructed org chart is incomplete and that internal reorganizations occur frequently.
Wiki pages
"Year Zero" contains 7818 web pages with 943 attachments from the internal development groupware. The software used for this purpose is called Confluence, a proprietary software from Atlassian.
Webpages in this system (like in Wikipedia) have a version history that can provide interesting insights on how a document evolved over time; the 7818 documents include these page histories for 1136 latest versions.
The order of named pages within each level is determined by date (oldest first). Page content is not present if it was originally dynamically created by the Confluence software (as indicated on the re-constructed page).
What time period is covered?
The years 2013 to 2016. The sort order of the pages within each level is determined by date (oldest first).
WikiLeaks has obtained the CIA's creation/last modification date for each page but these do not yet appear for technical reasons. Usually the date can be discerned or approximated from the content and the page order.
If it is critical to know the exact time/date contact WikiLeaks.
What is "Vault 7"
"Vault 7" is a substantial collection of material about CIA activities obtained by WikiLeaks.
When was each part of "Vault 7" obtained?
Part one was obtained recently and covers through 2016. Details on the other parts will be available at the time of publication.
Is each part of "Vault 7" from a different source?
Details on the other parts will be available at the time of publication.
What is the total size of "Vault 7"?
The series is the largest intelligence publication in history.
How did WikiLeaks obtain each part of "Vault 7"?
Sources trust WikiLeaks to not reveal information that might help identify them.
Isn't WikiLeaks worried that the CIA will act against its staff to stop the series?
No. That would certainly be counter-productive.
Has WikiLeaks already 'mined' all the best stories?
No. WikiLeaks has intentionally not written up hundreds of impactful stories to encourage others to find them and so create expertise in the area for subsequent parts in the series. They're there.
Look. Those who demonstrate journalistic excellence may be considered for early access to future parts.
Won't other journalists find all the best stories before me?
Unlikely. There are very considerably more stories than there are journalists or academics who are in a position to write them.
submitted by CuteBananaMuffin to r/conspiracy

GSAT linux live cd (how to easily and safely stress test memory)

Skip to the bottom if you don't care about the technicalities of how this was made.
I stumbled upon this thread over at overclock.net featuring a linux live cd that has GSAT (Google's stressapptest) built in. I decided to try to improve upon this despite my very limited linux knowledge, and managed to create a fully automatic linux live cd image that runs GSAT as soon as you boot your PC from it. That means you don't have to fear corrupting your windows install when testing memory stability, unlike with windows based RAM testers, and because it's GSAT it should be at least as reliable as any windows based utility. Google themselves developed it and use it to test memory, as does Asus.
This is how I made this:
I started by downloading a fresh 64bit TinyCore linux image from here (CorePure64-10.1.iso). I also downloaded the image made by ToBeOC and extracted the compiled stressapptest binary from /usr/local/bin (using 7-Zip). Then I extracted boot/corepure64.gz from the clean TinyCore image I downloaded previously and moved that over to a Ubuntu 19.10 virtual machine where I did the following:
  1. Created a new folder (called 123) on my desktop and moved corepure64.gz there and opened a terminal window where I first switched directories to my newly created folder with cd 123 and then switched to root with sudo su.
  2. Extracted corepure64.gz with the following command: zcat corepure64.gz | cpio -i -H newc -d (which I found here)
  3. Opened the file explorer with root permissions by running this command: nautilus
  4. Navigated to /home/user/Desktop/123/usr/local/bin in the file explorer.
  5. Copied the stressapptest binary over to that directory and made it executable by right clicking on it, going to properties, opening the Permissions tab and checking "Allow executing file as program".
  6. Navigated to /home/user/Desktop/123/etc/profile.d and placed a file called gsat.sh there which I made myself and also marked as executable just like in the previous step (a sketch of what such a script can look like follows this list). This is just a text file which you can open in notepad++ and edit if you wish. Make sure to save it with linux line endings (Edit > EOL Conversion in notepad++) if you edit it!
  7. Blanked out /home/user/Desktop/123/etc/motd (this step isn't necessary, it just removes the TinyCore linux motd).
  8. Opened the 123 folder on my desktop again and deleted the old corepure64.gz
  9. Repacked corepure64.gz by running the following command in the terminal window I opened previously, which was already in the right directory and running as root: find | cpio -o -H newc | gzip -2 > /home/user/Desktop/corepure64.gz
  10. Moved the new corepure64.gz back to Windows.
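For reference, here is a minimal sketch of what a gsat.sh autostart script can look like; this is an assumption about its contents rather than the exact file from the iso. It reserves a little memory for TinyCore itself (the 512 MB headroom is an arbitrary choice) and runs stressapptest until you stop it with CTRL+C:

  #!/bin/sh
  # gsat.sh - start GSAT automatically at login (sketch; the shipped file may differ)
  # Test most of the free RAM, keeping ~512 MB back for the OS itself.
  MEM_MB=$(awk '/MemFree/ {print int($2/1024) - 512}' /proc/meminfo)
  # -W = more CPU-stressful copy, -M = megabytes to test, -s = seconds to run
  stressapptest -W -M "$MEM_MB" -s 86400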
In Windows I then used UltraISO to open the clean CorePure64-10.1.iso file and opened the boot directory, where I dragged and dropped the new corepure64.gz file, replacing the old one. I then opened the isolinux directory and extracted the isolinux.cfg file, opened that in notepad++, changed prompt 1 to prompt 0, and then moved that back in to replace the old isolinux.cfg file. Then I simply chose Save As in UltraISO and saved the modified iso file.
The final product is just 15 MB in size and can be flashed to any usb drive using Rufus. I tested this iso file in a virtual machine and also on 2 different physical machines once flashed to a usb drive (my main Ryzen rig and an older Intel PC).

You can download the final iso file from here: https://drive.google.com/uc?id=1TyeNihg6bKIrmyNwtJ7Fc3asD7XBnXsq&export=download

Here's how to use it:
  1. Download Rufus and flash the iso file to an empty usb flash drive.
  2. Reboot your PC and enter your BIOS (this is usually done by spamming the DEL key while your PC is booting up).
  3. Make sure secure boot is disabled (it probably already is) and that CSM is enabled. Check your motherboard manual, which you can find online, or google for more in-depth instructions.
  4. Save your changes by pressing F10 after which your PC will reboot. Now you need to access your PCs boot menu which is usually F8 but not always, again check your motherboard manual for the exact key. You can also re-enter your BIOS and look for a boot override option or change your boot order. Pick your usb flash drive and boot your PC from it.
  5. That's it. The stress test will start automatically and you can let it run for as long as you wish. I recommend running it overnight for a thorough test, but a quick 1 hour test should also suffice. Once you are ready to stop the test, press CTRL+C to see the results. If it says PASS, no errors were detected. If it says FAIL, errors were detected and your memory settings aren't stable.
Here's a quick screen capture of what it looks like: https://streamable.com/4v06w
Lastly I want to thank ToBeOC for doing all the heavy lifting. And if anyone reading this has more experience with linux, and in particular with remastering a TinyCore linux iso, by all means release an iso done "right", since this is just a dirty mash-up and the best I managed with my limited skills. I just wanted something that anyone with zero linux experience can use, where you don't have to remember any commands, just plug a usb stick in and boot from it.
submitted by 4wh457 to r/Amd

Blindspot Whitepaper: Specialized Threat Assessment and Protection (STAP) for the Blockchain

BlindSpot™
Stop attacks before "zero day" and stop the Advanced Persistent Threat (APT)
We live in a dangerous world — our information technology systems face that danger every single day. Hackers are constantly attempting to infiltrate systems, steal information, damage government and corporate reputations, and take control of systems and processes.
Hackers share and use a variety of tools and techniques to gain access to, and maintain access to, IT systems, including groups and techniques so dangerous they have their own category - the Advanced Persistent Threat (APT). At the center of the APT are sophisticated techniques using malware to exploit vulnerabilities in systems. Traditional cyber security technologies use file signatures to locate these tools and hacker malware, but hackers are now actively camouflaging their tools by changing, customizing, and "morphing" them into new files that do not match any known signatures ('Polymorphic Malware'). This introduces a massive gap in malicious file detection which leaves the enterprise open to exploitation — and it's just not possible for traditional signature-based systems to keep up. In fact, signature-based anti-virus and anti-malware systems are only around 25% effective today. BlindSpot™ sees through it all, even as the files morph and change in a futile attempt to remain camouflaged.
Digital File Fingerprints
Any File Type, Any Language, Partial Matches, Exact Matches
BlindSpot™, our adaptive security solution, can see through the Polymorphic camouflage used by the world's most advanced hackers by utilizing digital file fingerprints and our proprietary adaptive BlindSpot™ 'brain' that constantly analyzes the fingerprints of known malicious files and tools to locate partial matches within the files on your systems - servers, laptops, desktops, USB drives, and even mobile devices. BlindSpot™ can cut right through the Polymorphic files, revealing the true hacking tools underneath, even if they are only fragments or pieces of a more complete set of hacking tools and technologies.
Most cyber attacks happen weeks or even months after their initial penetration and access to a network or system, and even the simplest attacks tend to have a fuse that is typically several days. It takes them time to map out a system, probe for the information they want, and obtain or forge credentials with the type of access they need. But from the moment their tools first land on your network and systems, BlindSpot™ sees them. In fact, BlindSpot™ can see them sitting on a newly inserted USB drive even if the files are not copied to your systems. This means BlindSpot™ can identify and alert you to malicious files and potential illicit activities before the attack happens - before zero day!
How does BlindSpot™ work? BlindSpot™ sits on the endpoint and continuously monitors file activity. Digital fingerprints, which can be used to find partial matches of any file type in any language, are reported back where they are kept forever in a temporal repository.
BlindSpot™ looks through all of the digital fingerprints — both those from files on your systems and those in a constantly updated database of known malicious files and hacking tools, to locate and alert you to any indication of hacking, malicious files, or illicit activity. BlindSpot™ is a disruptive technology that can see polymorphic malware and stop attacks before zero day.
Digital File Fingerprints are created from a file or a piece of digital data/information by using advanced mathematics to look at all of the small pieces of data that make up the file to create a very small, unique piece of mathematical data — a digital file fingerprint. Files may be of any file type and in any language - digital fingerprints can find partial and exact matches regardless of what is in the file itself.
Just like with humans, once a fingerprint has been taken, you no longer need the person to identify them. The fingerprint is enough. Even a partial fingerprint is enough, and sometimes a smudge will do. Digital fingerprints work on the same principle. Once BlindSpot™ has taken a digital fingerprint of a file, the file is no longer needed to identify it or to compare it with other files. And because digital fingerprints are tiny, they are easy to store. Even a multi-gigabyte file has a digital fingerprint that is no larger than 10k bytes.
Once you have two sets of digital fingerprints, you can compare them. Because BlindSpot™ starts with full fingerprints of known malicious files, it can identify matching files even when the digital fingerprint is only partially there. And with BlindSpot™’s advanced processing capabilities, file fragments, recovered data from a hard drive, partially downloaded documents, damaged files (both intentional and accidental) and other incomplete file structures can be properly fingerprinted in a way that still allows matches to be found.
Other technologies and software use static signatures, which do not work if any part of a file, regardless of how small, is different from another, or if the file is damaged in any way. BlindSpot™ and digital fingerprints enable partial matching, and can see through the camouflage that has become the industry standard for hackers across the globe. Static signature based solutions simply cannot do this.
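BlindSpot™'s fingerprinting engine is proprietary, but the general behavior of partial matching can be illustrated with the open-source fuzzy-hashing tool ssdeep; this is only an analogy for how similarity fingerprints work, not BlindSpot™'s actual algorithm:

  # fingerprint a known malicious tool, then check a suspect file against it
  ssdeep known_tool.exe > known.fingerprints
  ssdeep -m known.fingerprints suspect_file.bin

If suspect_file.bin shares content with known_tool.exe, even as a modified or partial copy, ssdeep reports the match with a similarity score, whereas an exact-match signature would report nothing.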
Imagine your favorite detective drama on TV. The prosecutor says "This partial fingerprint was found at the crime scene and the video camera across the street recorded a perfect image of the person's face." The jury deliberates and compares the picture and fingerprints of the defendant that were taken the day before. They conclude, because the fingerprint was not all there and was not 100% identical, and because one picture showed a mustache that looked identical but was one millimeter longer than the other picture, that the two people were not identical - and set the criminal free. Well, that show wouldn't be on TV long because crime would run rampant. Now imagine they had BlindSpot™. Criminals would be caught, the town would be a much safer place, and the show would be on for years to come.
Now imagine your network and systems without BlindSpot™, where traditional exact match signature software is on your front line of defense. All kinds of malicious files could walk right through and sit down on your hard drives, just waiting for hackers to activate them. But you don't have to imagine what your systems would be like with BlindSpot™ — instead, simply contact us, get BlindSpot™ in place, and we'll work with you to show you what's really on your systems and help you keep those systems safe.
Ensuring System Compliance
Take the guesswork out of compliance assessment
All Government systems go through Certification and Accreditation. BlindSpot™ can help you with malicious code protection, for both security considerations and required compliance. Guidelines found in NIST 800-53 Revisions 3+ Security Requirements for System Integrity, SI-3 Malicious Code Protection, state that malicious code protection mechanisms must be employed at information system entry and exit points, including workstations, notebook computers, and mobile devices, to detect and eradicate malicious code.
BlindSpot™, with its continuous monitoring of the files on your endpoints and its continuously updated repository of known malicious files, will provide the required real-time and full monthly re-scans of your files, will alert your administrative staff when malicious code is found, and will provide reports on potential malicious files and illicit activity, following up with very short false positive reports. BlindSpot™'s false positive rate is less than 0.01%. BlindSpot™ helps organizations meet the security requirements set forth and ensure compliance.
Intellectual Property Protection
Track sensitive information as it changes and moves around the enterprise
BlindSpot™ uses digital file fingerprints to identify partial and exact matches between files, regardless of file type or language. This ability can be used to track movements of and changes to files on a network of computers.
Government entities and corporations need to address the issue of monitoring documents and files that contain sensitive information and intellectual property, and it is no longer sufficient to simply store them on a secure server and require specific credentials to access the information. People, both unintentionally and sometimes with malicious intent, copy and paste parts of documents, move files to USB drives, and otherwise edit and transfer files in order to get them on to a laptop, share them with a co-worker, or exfiltrate confidential information to outside networks and systems. BlindSpot™ carefully watches all of the files on your network, including what's going on with USB drives. If someone copies part of a file that has sensitive data to another file, BlindSpot™ sees it. Furthermore, BlindSpot™ can alert you when it sees questionable activity with certain documents/files or with specific computers/individuals.
Your sensitive files now have a watchdog that catches both unintentional and malicious exposure to non-secure systems. Use BlindSpot™ to set up a custom database of the locations where your sensitive files are stored, and BlindSpot™ will create a set of digital file fingerprints that can be used to track those files across your network and systems. This ensures that an organization can know where its proprietary and sensitive information is 365/7/24, in real-time.
Supervisory Control and Data Acquisition (SCADA) Systems
Supervisory Control and Data Acquisition (SCADA) is a system for remote monitoring and control that operates with coded signals over communication channels (using typically one communication channel per remote station).
SCADA networks contain computers and applications that perform key functions in providing essential services and commodities (e.g. electricity, natural gas, gasoline, water, waste treatment, transportation) to all Americans. They are part of the nation’s critical infrastructure, provide great efficiency, are widely used, and require protection from a variety of cyber threats.
One of the most significant threats is benign files residing on the computers on
the network that morph into tools that hackers can use to gain access to the
network and the equipment it monitors and/or controls. These files might be part
of the operating system (binary files), might be a normal file that includes
scripting, or can even be a general data file moved onto the computer through a
network or a USB drive. By morphing, these files circumvent detection and
countermeasures. This is just one example of how a hacker can compromise and
exploit the system and the worst part is that you will never know until it is too late!
The recent Department of Justice announcement charging Iranian hackers
believed to be tied to the 2013 hacking of a New York dam illustrates this threat
clearly.
Enter BlindSpot™ Adaptive Security. BlindSpot™ monitors all files of all types (any format or language) without the requirement of a translator or human operator. BlindSpot™ can see right through the hacker’s camouflage of morphing files to quickly identify problems and threats before hackers have the opportunity to activate and use their tools. For U.S. and foreign-based systems, BlindSpot™ is a must-have cyber security solution.
The BlindSpot™ team has extensive experience with SCADA systems and critical infrastructure. Our BlindSpot™ solution is critical to the overall security framework of such systems, as it was designed to find the morphing, malicious files and associated illicit file activity that can lead to compromise of the integrity, confidentiality, and/or availability of the system. Threats loom on both the inside and the outside, and the dynamic nature of these systems requires continuous, temporal monitoring to stop cyber attacks before they happen.
Stop Ransomware
Identify and remove Ransomware before it encrypts your files
Ransomware attacks are on the rise and affect Fortune 500 companies, Federal
organizations, and consumers. This vicious type of attack affects your users’ ability to get their work done and prevents users from accessing files on a device or network by making the device or network unusable, by encrypting the files your users need to access, and/or by stopping certain applications from running (e.g. the web browser). A ransom is then demanded (an electronic payment of currency or bitcoins) with the promise that your data will be unencrypted and accessible again following the payment.
If the ransom payment is made, there is no guarantee that the data will be
unencrypted or returned to a state of integrity and/or availability. Furthermore,
there is also no guarantee that the people behind the ransom will not re-infect your systems with a variant of what was initially used. Payment encourages future attacks because attackers know you cannot detect the threat and will pay again next time. Surprisingly, there are only a handful of known ransomware files in use today (e.g. Crowti, Fakebsod). Safeguards exist that use static signatures to find exact matches for these known files, but the moment these files morph or are changed in any way they become undetectable by these solutions. BlindSpot™ digs deeper with digital file fingerprints and can find the new files, enabling you to analyze, quarantine, or delete them before they activate. This pro-active approach can be the difference between a system being protected and a system being made completely unavailable with encrypted data being held hostage for a ransom. The image below is an actual Fakebsod notification message.
BlindSpot™ uses digital file fingerprints to detect ransomware by looking at both partial and exact matches and can report the problem before it happens. Ransomware of the past attacked your personal computer; today’s variants attack servers. BlindSpot™ can detect both.
Case Study: March 2016 - Two more healthcare networks are hit by ransomware targeting servers. Advice from law enforcement — pay the ransom! (They did). File backups are insufficient. Paying ransoms is costly and only encourages repeat attacks.
BlindSpot™ is the most comprehensive solution available to detect and root out
ransomware. Take charge of the situation and put BlindSpot™ to work continuously monitoring your systems.
Get BlindSpot™ Now
Commercial or Government, with multiple contract vehicles available
How Can I Get BlindSpot™?
CYBR develops and sells its adaptive enterprise cyber security software product, BlindSpot™, and provides professional services and support for BlindSpot™ implementations.
Product
BlindSpot™ Adaptive Security is a continuous monitoring enterprise solution that tracks file-based activity on the endpoint using digital file fingerprints, can identify problems and cyber threats before zero day, and can see through morphing, camouflaged (polymorphic) files to make accurate determinations of malicious files and illicit activity.
Deployment Options
BlindSpot™ can be deployed as a secure cloud application for maximum flexibility, a standalone Enterprise implementation for maximum security, or the two combined in an Enterprise implementation augmented through a secure cloud gateway.
Professional Services and Training
BlindSpot™’s team of cyber security experts has the expertise to support you by creating a holistic enterprise security framework of people, policy, procedures, and technology, ensuring a security posture that implements the best risk management strategies, tactics, and operations available.
Email us at [[email protected]](mailto:[email protected]) for more information.
BlindSpot Solution Brief
June 29, 2018
POC: Shawn R. Key CEO, President
[[email protected]](mailto:[email protected])
Executive Summary and Estimated Pricing
CYBR’s BlindSpot is an enterprise cyber security solution that pro-actively identifies unknown and known malicious files and circumventive activity on endpoint devices. It is designed to interact with the CYBR Ecosystem and associated Web Portal. Distributed clients serve as the connection to the various BlindSpot server tiers.
BlindSpot identifies Illicit File Activity (IFA) and associated hacker activity via perceptive, industry standard algorithms. BlindSpot identifies exact AND similar files regardless of file type and/or language. This applies to ALL file types (e.g. documents, images, audio and video, carrier, etc.). Currently implemented safeguards and counter measures (such as anti-virus (AV), content filters and malware analysis tools) cannot address polymorphic/adaptive files and emerging threats. This introduces a massive gap in illicit file detection and leaves the enterprise open to exploitation. BlindSpot fills that void.
Additionally, corporations and government entities have a need to address known files and associated activity with regards to content and data management. The uncertainty of Intellectual Property (IP) location and propagation poses significant risk to the organization. The ability to identify the life cycle of a file (origin, source, destination, attributes and proliferation) ensures an organization knows where its proprietary, sensitive and privacy information is 365/24/7, in near real-time.
BlindSpot is significantly different from solutions in the emerging Specialized Threat Assessment and Protection (STAP) marketplace, as it scales to meet the needs of enterprise organizations and the commercial marketplace. BlindSpot’s proprietary database consists of millions of unique digital identifiers (hash values) that identify exact AND similar, modified files. This ensures that files existing in their original state, or those which have been intentionally modified, do not circumvent detection. Our algorithms ensure near-zero false positive return rates. The combinatory effect of this approach and the rare expertise of our executives and developers thwarts potential competition, as BlindSpot is an enterprise solution, not a tool.
The enterprise solution is provided as a license per IP address with associated appliance and/or server hardware requirements.
CYBR BlindSpot Technical Deep Dive
CYBR’s BlindSpot product is currently available as a Software as a Service (SaaS) deployment blockchain solution and will be available as a full enterprise install by Q2 2019. In both implementations, end-point agent software monitors the hard drive(s) of a computer or server, analyzes any files that change, and reports [multiple] file hashes back to the main system. This enables the main system to effectively monitor which files could be malicious or represent intellectual property on the computers and servers within the customer’s network. By using fuzzy hashing algorithms, the system can detect polymorphic malware and intellectual property that has been partially hidden or obfuscated.
Applications
End-point (client) agent: native to each major OS as a fat client. Currently we have end-point agents for Microsoft Windows-based systems using MS .NET c# 2.0/4.5 and C++, although the c# portion will be replaced with all c++ code to increase scalability, efficiency, and security, in Q1 2016. End-point agents for Mac OS (written in Objective-C) and popular Linux platforms (written in c++) will ship in Q1/Q2 2016. Development work on the CentOS linux agent will begin in December 2015.
The Control Application enables system administrators to configure each end-point agent, the system itself, and to actively monitor and access reports on files that have been identified by the system as problematic or of interest. At this time the Control Application is able to provide configuration and monitoring services but is not yet ready for customer on-site deployment and is therefore only available in a SaaS model.
The middle-tier of the system, the Portal server, currently runs in MS .NET and is written in c#. This tier will be upgraded to a full c++ implementation to increase scalability, efficiency, and security, in Q1 2016, and will run as a standard web server extension on a Linux platform (CentOS/Apache).
The data-tier of the system currently is running in MS SQL Server 2008/2012 and uses transact-SQL tables, but does not use any stored procedures or transactions. Although this tier is sufficient for scalability through mid to late 2016, a no-SQL version of the data tier will be developed in 2016.
The Crush server (hashing services) currently runs on MS Server 2008/2012, is written in c#/c++ and is a) being ported to run as a (c++) daemon on a standard Linux (CentOS) server, and b) being re-engineered to function as a massively parallel application (c/c++) running on NVIDIA Tesla GPU accelerated systems. The Crush server communicates with the data-tier directly and the C2 server indirectly. Multiple Crush servers can run simultaneously and are horizontally scalable and fault-tolerant.
The C2 (Command and Control) server, written in c# and being moved to c++, communicates with the data-tier directly and the Crush server and Control Application indirectly to provide scheduling, system health and integrity, and prioritization services, as well as redirecting jobs to maintain fault tolerance of the back-end server components. Multiple C2 servers can run simultaneously and are horizontally scalable.
Hardware and Network:
The basic architecture of the system has two different stacks of software. First, a typical 3-tier approach isolates data storage from end-point and Control Application access, with a protocol-altering Portal server acting as the middle-man. In the SaaS model, the end-point and Control Application software reside on-site with the customer, and the remaining stack components reside at the SaaS hosting datacenter. The second stack consists of multiple horizontally-scalable server components that run entirely in the backend as daemons and interact primarily through the data area to provide the services that are being marketed and sold to the customers. The two stacks are kept somewhat separate from each other in order to buffer one against the other in times of extreme load and for enhanced security.
Following is a description of each software module in the system and how it relates to the others:
The system has one component for data collection (the end-point agent software, which resides on the desktop computers and servers within a deployed customer site), one component for system administration (the Control Application, which resides on a desktop computer that the customer has access to or that an analyst can access through the SaaS system), and a collection of software processes/daemons and a data storage area that comprise the back-end.
The end-point agent collects data from the end-point computer, passes it to the Portal server, which in turn stores it in the data area.
The C2 server monitors the in-flow of data from the end-points, and tasks the Crush server(s) to analyze the data and compare it to databases of known good, known bad, and watch list files, in an efficient manner.
The C2 server also provides notification to the customer of any problematic or watch-list files following the completion of the Crush server tasks.
The Crush server monitors the data area, and performs batch or real-time processing of data as instructed to by the C2 server.
Technology
CYBR’s BlindSpot software is a commercially available product that combines a small footprint end-point agent with a centralized monitoring and management system to track files and file changes on the end-point using partial-match digital fingerprints rather than rigid full-match-only file signatures. As files and data buffers are created, edited/altered, and moved either through the network or via removable media devices including USB drives, the product uses its unique and proprietary technologies in combination with industry standard technologies to identify and locate both known malware and unknown [polymorphic] malware on end-points that are continuously monitored by the product. Staff is notified, depending on the urgency or type of digital fingerprint identified, through integrations with 3rd party SIEM solutions, email/SMS transmissions, and reports that are available using the central management system. A false positive rate of partial digital fingerprint matching of ~1 in 10-12 means staff will not be bombarded with unnecessary alerts, maintaining staff efficiency.
Overview: Traditional anti-malware products use static file signatures to locate known malware but have no means of detecting unknown malware. CYBR’s product uses digital file fingerprints that can identify both partial file matches as well as full file signature matches, and in doing so can locate and identify both known and unknown malware within the deployed enterprise. A combination of industry standard and publicly available algorithms and CYBR’s own proprietary algorithms, trade secrets, methods, optimizations, and intellectual property for which a patent is currently pending (which is owned solely by CYBR) are combined to form a comprehensive anti-malware platform and continuous end-point monitoring product that is completely unique in the marketplace. Through the use of our proprietary algorithms and optimizations, the product has the ability to scale to the enterprise level and can track desktops/servers as well as mobile/phone/tablet/Internet of Things (IoT) devices.
Project Implementation: The implementation of this product would include both the commercially available BlindSpot product as well as prototypes of integration packages to connect with the on-site Security Information and Event Management (SIEM) and other systems and prototypes of end-point agents running on operating systems that are not yet available in the currently available version of the product. Both the integration and end-point agent prototypes would be based on existing modular code/functionality and would extend functionality past the currently available modules to ensure the full needs and requirements of the project are met. A full version of BlindSpot would be deployed on servers at/on the enterprise site, and prototypes of both SIEM integrations and new end-point agents would be deployed to augment the full production system. Information flow between all areas of the full system and prototypes would be tested and verified with increasing scale to ensure the level of performance required is available prior to the completion of the project.
End-point Agents: Each end-point is installed with native low-profile proprietary agent software that minimizes both its file system footprint and CPU use. The current product has a native end-point available for Microsoft Windows OSs (both desktops/tablets and servers) in production, and has native end-point agents in development/prototype stage for iOS, Android, MacOS, and RHEL/CentOS, with additional popular Linux derivatives to follow. The main job of the end-point agent is to communicate with the OS and monitor the file system for any changes in files that occur. When changes are detected, a digital file fingerprint of the file is taken and reported to the centralized data store, or cached until a later time if the centralized data store is unreachable (e.g. no cell coverage, laptop not connected to the internet). The agent normally runs in “stealth-mode” and uses minimal CPU, RAM, and file system footprint so as not to disrupt the end-user’s workflow or impact system performance. Taking a digital fingerprint of a file and reporting it is very fast and thus the main job of the end-point agent is not system resource intensive. The “heavy lifting” is done on the back-end and does not burden the users or the end-point devices. Configuration of each end-point agent is conducted through the centralized management system, and changes in configuration are transmitted to the end-point agent within a few seconds (provided there is network connectivity).
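To make the agent's role concrete, here is a minimal, hypothetical Python sketch of a polling-based file monitor. It is not CYBR's agent code: it uses a plain SHA-256 digest where the real product uses partial-match digital fingerprints, and the watch directory, polling interval, and report() stub are illustrative assumptions only.

import hashlib
import os
import time

WATCH_DIR = "/home/user/documents"   # hypothetical directory to monitor
POLL_SECONDS = 30                    # illustrative polling interval
_last_seen = {}                      # path -> (mtime, digest)

def fingerprint(path):
    # Full SHA-256 of the file's raw bytes (stand-in for a partial-match fingerprint).
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def report(path, digest):
    # Placeholder: a real agent would transmit this to the Portal server,
    # or cache it locally if the server is unreachable.
    print(f"changed: {path} {digest}")

def scan_once():
    for root, _dirs, files in os.walk(WATCH_DIR):
        for name in files:
            path = os.path.join(root, name)
            try:
                mtime = os.path.getmtime(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            prev = _last_seen.get(path)
            if prev is None or prev[0] != mtime:
                digest = fingerprint(path)
                if prev is None or prev[1] != digest:
                    report(path, digest)
                _last_seen[path] = (mtime, digest)

if __name__ == "__main__":
    while True:
        scan_once()
        time.sleep(POLL_SECONDS)

A production agent would hook the operating system's file-change notifications rather than polling, and would queue unsent fingerprints while offline, as described above.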
Central Data Store: A collection of databases on the back end store file watch lists, known good and known bad digital file fingerprints (whitelists and blacklists containing digital file fingerprints of known malware), priority lists and configurations, end-point configurations, last-seen lists, and the full temporal accounting of all digital file fingerprints reported by end-point agents. As new threats are identified they are added to the central data store. As files on end-points change or are edited, their new digital fingerprints are added to the central data store as well. As new threats are identified through polymorphic partial matching, they are added to the known bad list as well.
Identification of Known and Unknown Malware: The product’s Crush server(s) use sophisticated algorithms to compare the partial digital file fingerprints of files on end-points against the databases of digital file fingerprints of known malware, regardless of the content of the files themselves. The product looks at the raw data (bytes) in the files when creating the digital file fingerprints, and as such all file types/formats/languages are handled. This means that all file types and data in any and all languages can be compared with similar files. Binary DLLs, MS Word documents and spreadsheets (MS Excel, csv, …), JPEG images, JavaScript, HTML, and executable files (.exe) are all handled by the product, and known/unknown malware within them can be located using the digital file fingerprints in the centralized data store and the Crush server’s analysis.
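As an illustration of partial matching only (CYBR's algorithms are proprietary and patent-pending; this sketch is not them), one simple way to score how similar two files are, regardless of type or language, is to hash fixed-size chunks of each file and take the Jaccard similarity of the two chunk-hash sets. The chunk size, file names, and 60% threshold below are arbitrary assumptions.

import hashlib

CHUNK = 4096  # illustrative chunk size in bytes

def chunk_hashes(path):
    # Set of SHA-256 digests of fixed-size chunks of the file's raw bytes.
    hashes = set()
    with open(path, "rb") as f:
        while True:
            block = f.read(CHUNK)
            if not block:
                break
            hashes.add(hashlib.sha256(block).hexdigest())
    return hashes

def similarity(path_a, path_b):
    # Jaccard similarity of the two chunk-hash sets, from 0.0 to 1.0.
    a, b = chunk_hashes(path_a), chunk_hashes(path_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

if __name__ == "__main__":
    # Hypothetical file names; flag the suspect if it shares 60%+ of chunks with a known-bad sample.
    score = similarity("suspect.bin", "known_bad_sample.bin")
    if score >= 0.6:
        print(f"partial match ({score:.0%}) - escalate for analysis")

A real fuzzy-hashing scheme such as ssdeep's context-triggered piecewise hashing uses a rolling hash to pick chunk boundaries from the content itself, so that inserting or deleting a few bytes does not shift every subsequent chunk and destroy the match the way fixed-offset chunking does.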
Scale, System Throughput, and Priority: A single Crush server can serve a small enterprise (100s or 1,000s of end-points), and a horizontally scalable array of Crush servers can be used to provide identification of malware for large enterprises. Similarly, databases in the central data store can be split and maintained/mirrored on several servers or run in a monolithic configuration. This makes the system highly scalable and able to be adapted to enterprises of varying sizes/scales while maintaining a good price/performance ratio. Priority lists can be designated for Crush servers such that high-priority end-points and/or high-priority malware fingerprints can be compared and identified in real-time, and similarly, low-priority lists (e.g. malware fingerprints that have not been seen in months or years) can be run in the evenings or when the system is running below normal load to ensure both immediate analysis of high-priority threats and comprehensive analysis of low-priority threats.
Integration: Several modular integration points within the product enable straightforward integration with 3rd party SIEM software and other reporting/management tools and systems. Distinct “notification channels” within the product are used based on the type of threat detected, the priority level of the specific threat detected, the confidence of the match (low percentage match of digital fingerprint vs high), and the location of the match (specific end-point list). Each notification channel has integration points that can be linked in with 3rd party systems so that staff are notified using software and procedures they are already familiar with and trained on (i.e., through a SIEM solution that is already being monitored by dedicated, trained staff). Prototypes of each specific integration would need to be developed as a part of this project to match/communicate with the exact SIEM (or other) system that is in use at the deployment site in the manner desired. Such a prototype would be developed for the purpose of evaluating the technical interconnectivity between systems to meet the requirements of the deployment, and following the prototype testing period, would be load-tested and stress-tested to ensure its performance meets the demands of a highly scalable environment, leading to a mature integration over a period of 3-6 months following the initial prototype period of 1-3 months.
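One hypothetical way to model the notification-channel routing described above is a small table of predicates mapped to delivery integrations; the channel ordering, thresholds, and sink functions below are assumptions for illustration, not the product's actual interface.

from dataclasses import dataclass

@dataclass
class Detection:
    endpoint: str
    file_path: str
    confidence: float  # 0.0-1.0 strength of the fingerprint match
    priority: str      # e.g. "high" or "low" watch-list priority

def send_to_siem(d):
    print(f"[SIEM] {d.endpoint}: {d.file_path} ({d.confidence:.0%} match)")

def send_email_sms(d):
    print(f"[email/SMS] urgent: {d.file_path} on {d.endpoint}")

def queue_for_report(d):
    print(f"[report] queued: {d.file_path}")

# Each channel is a (predicate, sinks) pair; the first matching channel handles the detection.
CHANNELS = [
    (lambda d: d.priority == "high" and d.confidence >= 0.9, [send_to_siem, send_email_sms]),
    (lambda d: d.confidence >= 0.5, [send_to_siem]),
    (lambda d: True, [queue_for_report]),
]

def notify(d):
    for predicate, sinks in CHANNELS:
        if predicate(d):
            for sink in sinks:
                sink(d)
            return

notify(Detection("WS-042", r"C:\temp\invoice.js", 0.93, "high"))

Here a high-priority, high-confidence detection goes to both the SIEM and email/SMS, a mid-confidence one goes to the SIEM only, and everything else lands in the periodic report, mirroring the priority- and confidence-based routing described above.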
Technology Section Summary: With end-points being continuously monitored by the product, both known and unknown malware threats delivered by the network and removable media will be detected and reported through SIEM system integration and direct email/SMS messages with minimal impact to the end-point (on all major OSs, including desktop and mobile). Centralized management and temporal monitoring of digital fingerprints enables the system to proactively locate and identify malware threats before zero day as well as enabling the staff to conduct their own investigations of systems either in the present or the past for forensic investigations. This makes CYBR’s BlindSpot a complete product that reaches all of the end-point devices to ensure safety and security from all types of malware threats.
Defense Utility
The blockchain’s cyber security posture will be greatly enhanced by BlindSpot. CYBR’s executive team works with various military and federal organizations and has a deep understanding of the cyber security challenges that face the enterprise today including advanced persistent threat (APT), polymorphic and pleomorphic malware, zero day attacks and the need to locate white and black files in real time. These threats have now permeated to the blockchain and must be secured.
Company and Customers
The proposed team includes CYBR, Inc. executive management and staff. The company works closely with its sister company, 21st Century Technologies, Inc. (21CT), which is a HUBZone-certified small business entity. 21CT serves as a value added reseller (VAR) for CYBR, Inc. and is currently a teammate on the DOMino classified DHS contract as a subcontractor to Raytheon.
Existing, paying customers include Stratford University, Test Pros and Devitas. The company also has integrator and VAR partner relationships with Anomali (formerly Threatstream), Lockheed Martin (Cyber and Space) and various commercial entities, which the company believes will become paying customers in 2019.
Transition and Commercialization
Our technology is a commercially available product and commercial sales have been made. The company is actively working to scale this solution to hundreds of thousands of users, which the company has deemed do-able and is in the process of horizontally scaling.
Data Rights Assertions
CYBR, Inc. currently holds a provisional patent and incorporates other trade secrets into the solution. No unreasonable restrictions (including ITAR) are placed upon the use of this intellectual property with regards to global sales.
submitted by CYBRToken to u/CYBRToken

Lore v2 QT on Raspberry Pi

Hello,
 
To follow up to mindphuk's excellent piece on building the headless client on Raspberry Pi (https://www.reddit.com/blackcoin/comments/6gkjrw/wip_blackpi_a_stake_device_based_on_raspberry/), I thought if anyone was interested I'd show you how to get the full QT version running on the Pi on the Jessie with Pixel desktop. This works and has been soak tested for several days now on a standard Raspberry Pi 3. I have since added some coins and it stakes a handful of times a day.
 
Running staking Lore clients paves the way for some of the future use cases of BLK utilising the Bitcoin 0.12 (and newer) core tech, including colored coins. So I'm going to leave this one going indefinitely to kickstart the number of Lore clients staking. It's certainly not mandatory but it will be good in the longer term to have a nice distribution of Lore staking clients.
 
The cross-compile which lets you create binaries for multiple platforms didn't work for the QT version on the Pi, so there is more to do than just running the binary unfortunately, as below. There are folks working on some much cleaner solutions than this for the Pi, with a custom front end, and where you won't have to do any mucking about. That is coming soon. In the meantime, if you enjoy a fiddle with such things, here's how to get this QT client working on your Pi.
 
These instructions assume you are starting from scratch with a completely blank OS.
 
Download Jessie with Pixel from: http://downloads.raspberrypi.org/raspbian/images/raspbian-2017-07-05/2017-07-05-raspbian-jessie.zip
 
Note they have since (August 2017) released a version called 'Stretch' which does not work with this guide. I'll see if I can come up with something new for that at some point and link to it here when I have. In the meantime the guide should work with the Jessie image above.
 
Unzip the file and extract the .img file, then burn it onto a fresh SD card to boot from (to be safe, use 16GB or larger), using a tool like Win32DiskImager or Etcher.
 
Assuming you have keyboard/mouse and monitor plugged into your pi, boot it up and the Jessie Desktop will show.
 
Before we do anything else, you should increase the default swap size on the pi, as compiling certain libraries can exhaust the RAM and get stuck otherwise. To do this, launch a Terminal window and type:
 
sudo nano /etc/dphys-swapfile 
 
and change CONF_SWAPSIZE from 100 to:
 
CONF_SWAPSIZE=1024 
 
Exit nano with Ctrl+X, confirming the save when prompted, to write out the file.
 
Then, run the following to restart the swapfile manager:
 
sudo /etc/init.d/dphys-swapfile stop
sudo /etc/init.d/dphys-swapfile start
 
Now, launch the browser and download the Lore 2.12 binaries for ARM here: https://mega.nz/#!k2InxZhb!iaLhUPreA7LZqZ-Az-0StRBUshSJ82XjldPsvhGBBH4 (Version with fee fix from 6 September 2017)
 
(If you prefer to compile it yourself instead, it is possible by following the instructions in the original article by Mindphuk just taking into account this is the newer version of the Lore client than when that was written (https://github.com/janko33bd/bitcoin/releases) and the versions of Boost and the Berkeley DB need to be the same as below.)
 
Double click the zip and extract the Lore binary files. Yes, at the moment they are all called 'bitcoin', not 'blackcoin' or 'Lore' - this is because the code derives from a recent bitcoin core implementation so this has not yet been updated. You can place these wherever you like.
 
In the Terminal window, change directory to where you put the binaries, e.g.:
 
cd Downloads/lore-raspberrypi-armv7-jessie-pixel
chmod +x *
 
That marks the binaries as executable.
 
Now, we need the Boost libraries installed for any of the Lore binaries to work. The project was done with Boost 1.62.0. Unfortunately the Jessie repository only goes up to 1.55, so we need to download and build 1.62 manually on the device.
wget https://sourceforge.net/projects/boost/files/boost/1.62.0/boost_1_62_0.tar.gz/download
tar -xvzf download
cd boost_1_62_0
sudo ./bootstrap.sh
sudo ./b2 install
 
(This will take almost 2 hours. Have a nice cup of tea and a sit down.)
 
When I came to run the binaries, I found they couldn't find Boost. Running this command fixes that:
sudo ldconfig 
 
Now we are going to install the packages which aren't already included in the default OS installation which the binaries need in order to run:
sudo apt-get install qrencode libprotobuf-dev libevent-pthreads-2.0-5 
 
Now we need to install the Berkeley Database version 6.2.23. This is the version Lore v2 uses. Bitcoin still uses 4.8 which is 10 years old! This doesn't take too long.
wget http://download.oracle.com/berkeley-db/db-6.2.23.tar.gz
tar -xvzf db-6.2.23.tar.gz
cd db-6.2.23/build_unix
../dist/configure --prefix=/usr --enable-compat185 --enable-dbm --disable-static --enable-cxx
 
I find this next section of the Berkeley instructions worked better just switching to root, which can be fudged by running sudo su before the rest:
sudo su
make
make docdir=/usr/share/doc/db-6.2.23 install
chown -v -R root:root /usr/bin/db_* /usr/include/db{,_185,_cxx}.h /usr/lib/libdb*.{so,la} /usr/share/doc/db-6.2.23
 
Now we're going to go up a couple of directories to where the binaries were:
cd ../.. 
 
Then run the client!
./bitcoin-qt 
 
And there you have it. Should hopefully end up looking a bit like this: http://imgur.com/a/eEHGa
 
Using the Bootstrap can save a while syncing. Download it at: https://www.reddit.com/blackcoin/comments/6b3imq/blackcoin_bootstrapdat_up_to_block_1631800
 
Place the bootstrap.dat file into the ~/.lore directory.
 
Run ./bitcoin-qt again, it will say 'Importing Blocks' rather than 'Synchronising with Network'. My pi sync'ed fully in about 5-6 hours.
 
If you want peace of mind that Lore will always start on bootup into the Jessie w/Pixel desktop (i.e. after a power cycle), then you need to create a .desktop file in the following place.
sudo nano ~/.config/autostart/Lore.desktop 
 
And in it, enter the following (tailoring the Exec line below to the whereabouts of your bitcoin-qt file):
[Desktop Entry]
Name=Blackcoin Lore
Comment=Mining without the waste
Exec=/home/pi/Downloads/lore-raspberrypi-armv7-jessie-pixel/bitcoin-qt
Type=Application
Encoding=UTF-8
Terminal=false
Categories=None;
 
Power usage and payback time
 
After a good while leaving it going by itself, the CPU load averages got down to almost zero, all of the time. Idling, the Pi uses a bit less than 3 watts, which means it takes roughly two weeks to use 1 kWh of electricity (3 W × 24 h × 14 days ≈ 1 kWh).
 
If you pay e.g. 12.5 cents a unit, that's what you'd expect this to cost to run in a fortnight. That's around $0.25 a month or $3 a year. Green and cheap and helping to secure the BLK network. I paid for the year's worth of electricity in 2 days staking with 25k BLK. Makes mining look silly, huh? ;)
 
Securing your Pi
 
With staking, your wallet needs to be unlocked and as such, the keys to your wallet are on the device. In a clean and newly installed environment as described above, and if you don't allow others to use your device and there is no other software or nasties running on it, there is no real cause for concern. However, there are some basic security precautions you can take.
 
Firstly, if you have enabled SSH and are playing with your pi across your LAN (or worse, the Internet), you should immediately change the password for the default 'pi' user (which is preconfigured to be 'raspberry'). Simply log in as normal, then type:
 
passwd 
 
You'll be prompted to enter the old and the new passwords.
 
Security by default
 
Your Pi is likely, by default, not to be exposed to incoming connections from the outside world, because your router is likely using a private address range for your LAN (192.168.x.x, 10.x.x.x, or 172.16.x.x-172.31.x.x), which means all incoming connections are effectively blocked at the router anyway unless you set up a 'port forward' record to allow packets arriving on certain ports to be forwarded to a specific internal IP address.
 
As for accessing your Pi across the internet, if you have set up a port forward, this likely has security ramifications. Even basic old fashioned protocols have proven in recent times to have uncaught flaws, so it's always advisable to lock down your device as much as possible, and even if you only plan to access the Pi over your LAN, install a firewall to configure this. I used one called ufw, because it's literally an uncomplicated firewall.
 
sudo apt-get install ufw
sudo ufw allow from 192.168.0.0/16 to any port 22
sudo ufw --force enable
 
This allows just port 22 (SSH) to be open on the Pi to any device on my LAN's subnet (192.168.0.x). You can change the above to a single IP address if paranoid, or add several lines, if you want to lock it down to your LAN and a specific external static IP address (e.g. a VPN service you use). To find out what subnet your router uses, just type:
 
ifconfig 
 
and you'll see on the interface you are using (either hard wired or wifi) the 192.168 or 10. or 172. prefix. Change the above rule so it matches the first two octets correctly (e.g. 10.0.0.0/16 if you're on a 10.0. address).
 
You may already use VNC to access your Pi's desktop across your LAN, this uses port 5900. Add a line like above to lock it down to an internal address. It's not a good idea to expose this port to the wider world because those connections are not encrypted and potentially could be subjected to a MITM attack.
 
You can query the status of the firewall like this:
sudo ufw status
 
And of course, try connecting remotely once you change the rules to see what works. You should consult the official documentation for further options: https://help.ubuntu.com/community/UFW
 
Back up & Recovery
 
There are again many ways to tackle this so I'll just speak about my basic precautions in this regard. Don't take it as a be-all-and-end-all!
 
The wallet.dat file is the key file (literally) containing all the private/public keys and transactions. This can be found in:
 
~/.lore 
 
You can navigate there using Jessie w/Pixel's own file manager or in a terminal window (cd ~/.lore). You can copy this file or, if you'd rather keep a plain text file of all your public and private keys, use the 'dumpwallet' command in the console. In Lore, go to Help > Debug Window > Console and type 'dumpwallet myfilename' where myfilename is the file you want it to spit out with all your keys in it. This file will end up in the same place you launch bitcoin-qt from.
 
The instructions earlier on, when running Lore for the first time, intentionally left out encrypting your wallet.dat file, because in order for the wallet to stake upon startup it needs its keys decrypted already. This isn't perfect, but after a power cycle it would never stake unless you left it decrypted. So the best practice here is, as soon as the wallet.dat file has left your device (i.e. you copy it to a USB stick for example), put it in an encrypted folder or drive (or both).
 
In Windows, one way is to use Bitlocker drive encryption for the entire drive. You should follow the instructions here to encrypt your flash drive before your wallet.dat is on there, and don't forget the password!!
http://infosec.nmsu.edu/instructions-guides/how-to-enable-bitlocker-to-go-for-external-hard-drives-and-usb-flash-drives/
 
On the Mac, I use a software package called Concealer to encrypt files I store on the Mac itself: http://www.belightsoft.com/products/conceale   There are almost certainly free packages with similar functionality, I have just used that one for years.
 
Either way, if you want to just make sure your USB drive is encrypted, you can do so in one-click in Finder before you put the sensitive files on it: http://lifehacker.com/encrypt-a-usb-stick-in-finder-with-a-click-1594798016
 
Note that these disk encryption methods may mean having to access the USB stick on a PC or Mac in order to retrieve the files in the event of a disaster. Be aware this may mean exposing them to more security issues if your computer is in any way compromised or someone nefarious has access to your computer. There are more 'manual' ways of backing up and recovering, such as literally writing down private/public key pairs which this guide doesn't go into, but may suit you better if paranoid about your setup.
 
Recovery
 
The wallet.dat file has everything in it you need to recover your wallet, or if you used 'dumpwallet', the file you saved out has all the keys.
 
Wallet.dat method: Install Lore as normal then replace any auto-generated wallet.dat in ~/.lore directory with your backup. If a lot of time has elapsed and many transactions have occurred since your backup, launch lore with:
./bitcoin-qt -rescan 
 
And if that doesn't do the job, do a full reindex of the blockchain:
 
./bitcoin-qt -reindex 
 
If you used the dumpwallet command, install Lore then place the file containing all the keys that you saved out in the same directory as bitcoin-qt. In Lore, go to Help > Debug Window > Console and type 'importwallet myfilename' where myfilename is that file containing all the keys. The wallet should automatically rescan for transactions at that point and you should be good to go.
 
There are a million ways to do effective security and disaster recovery, but I hope this shows you a couple of basic precautionary ways. There are discussions about better ways to stake without compromising too much security which are happening all the time and developments in this regard will happen in time.
 
In the meantime, feel free to comment with your best practices.
 
submitted by patcrypt to blackcoin

[STIG] Windows Server 2016 Security Technical Implementation Guide

WINDOWS SERVER 2016 (STIG) OVERVIEW

SECURITY TECHNICAL IMPLEMENTATION GUIDE

Version 1, Release 4
27 April 2018
Developed by DISA for the DoD
Trademark Information
Names, products, and services referenced within this document may be the trade names, trademarks, or service marks of their respective owners. References to commercial vendors and their products or services are provided strictly as a convenience to our users, and do not constitute or imply endorsement by DISA of any non-Federal entity, event, product, service, or enterprise.
DOCUMENTATION
U_Windows_Server_2016_V1R4_Overview
U_Windows_Server_2016_V1R4_Revision_History
U_Readme_SRG_and_STIG
DOWNLOADS
U_Windows_Server_2016_V1R4_STIG
U_STIGViewer-2.7.1
TABLE OF CONTENTS
1. INTRODUCTION
1.1 Executive Summary
1.2 Authority
1.3 Vulnerability Severity Category Code Definitions
1.4 STIG Distribution
1.5 Document Revisions
1.6 Other Considerations
1.7 Product Approval Disclaimer
2. ASSESSMENT CONSIDERATIONS
2.1 Security Assessment Information
2.2 Windows Server 2016 Installation Options
2.3 Group Policy Administrative Template Additions
3. GENERAL SECURITY REQUIREMENTS
3.1 Hardware and Firmware
3.2 Virtualization-Based Security Hypervisor Code Integrity
LIST OF TABLES
Table 1-1: Vulnerability Severity Category Code Definitions
1. INTRODUCTION
1.1 Executive Summary
The Windows Server 2016 Security Technical Implementation Guide (STIG) is published as a tool to improve the security of Department of Defense (DoD) information systems. The requirements were developed by DoD Consensus as well as Windows security guidance by Microsoft Corporation. This document is meant for use in conjunction with other applicable STIGs including such topics as Active Directory Domain, Active Directory Forest, and Domain Name Service (DNS).
The Windows Server 2016 STIG includes requirements for both domain controllers and member servers/standalone systems. Requirements specific to domain controllers have “DC” as the second component of the STIG IDs. Requirements specific to member servers have “MS” as the second component of the STIG IDs. All other requirements apply to all systems.
1.2 Authority
DoD Instruction (DoDI) 8500.01 requires that “all IT that receives, processes, stores, displays, or transmits DoD information will be […] configured […] consistent with applicable DoD cybersecurity policies, standards, and architectures” and tasks that Defense Information Systems Agency (DISA) “develops and maintains control correlation identifiers (CCIs), security requirements guides (SRGs), security technical implementation guides (STIGs), and mobile code risk categories and usage guides that implement and are consistent with DoD cybersecurity policies, standards, architectures, security controls, and validation procedures, with the support of the NSA/CSS, using input from stakeholders, and using automation whenever possible.” This document is provided under the authority of DoDI 8500.01.
Although the use of the principles and guidelines in these SRGs/STIGs provides an environment that contributes to the security requirements of DoD systems, applicable NIST SP 800-53 cybersecurity controls need to be applied to all systems and architectures based on the Committee on National Security Systems (CNSS) Instruction (CNSSI) 1253.
1.3 Vulnerability Severity Category Code Definitions
Severity Category Codes (referred to as CAT) are a measure of vulnerabilities used to assess a facility or system security posture. Each security policy specified in this document is assigned a Severity Category Code of CAT I, II, or III.
Table 1-1: Vulnerability Severity Category Code Definitions
DISA Category Code | Guidelines
CAT I | Any vulnerability, the exploitation of which will directly and immediately result in loss of Confidentiality, Availability, or Integrity.
CAT II | Any vulnerability, the exploitation of which has a potential to result in loss of Confidentiality, Availability, or Integrity.
CAT III | Any vulnerability, the existence of which degrades measures to protect against loss of Confidentiality, Availability, or Integrity.
1.4 STIG Distribution
Parties within the DoD and Federal Government’s computing environments can obtain the applicable STIG from the Information Assurance Support Environment (IASE) website. This site contains the latest copies of any STIGs, SRGs, and other related security information. The address for the IASE site is http://iase.disa.mil/.
1.5 Document Revisions
Comments or proposed revisions to this document should be sent via email to the following address: [disa.stig_[email protected]](/). DISA will coordinate all change requests with the relevant DoD organizations before inclusion in this document. Approved changes will be made in accordance with the DISA maintenance release schedule.
1.6 Other Considerations
DISA accepts no liability for the consequences of applying specific configuration settings made on the basis of the SRGs/STIGs. It must be noted that the configuration settings specified should be evaluated in a local, representative test environment before implementation in a production environment, especially within large user populations. The extensive variety of environments makes it impossible to test these configuration settings for all potential software configurations.
For some production environments, failure to test before implementation may lead to a loss of required functionality. Evaluating the risks and benefits to a system’s particular circumstances and requirements is the system owner’s responsibility. The evaluated risks resulting from not applying specified configuration settings must be approved by the responsible Authorizing Official. Furthermore, DISA implies no warranty that the application of all specified configurations will make a system 100 percent secure.
Security guidance is provided for the Department of Defense. While other agencies and organizations are free to use it, care must be given to ensure that all applicable security guidance is applied both at the device hardening level as well as the architectural level due to the fact that some of the settings may not be able to be configured in environments outside the DoD architecture.
1.7 Product Approval Disclaimer
The existence of a STIG does not equate to DoD approval for the procurement or use of a product.
STIGs provide configurable operational security guidance for products being used by the DoD. STIGs, along with vendor confidential documentation, also provide a basis for assessing compliance with Cybersecurity controls/control enhancements, which supports system Assessment and Authorization (A&A) under the DoD Risk Management Framework (RMF). DoD Authorizing Officials (AOs) may request available vendor confidential documentation for a product that has a STIG for product evaluation and RMF purposes from [disa.stig_[email protected]](/). This documentation is not published for general access to protect the vendor’s proprietary information.
AOs have the purview to determine product use/approval IAW DoD policy and through RMF risk acceptance. Inputs into acquisition or pre-acquisition product selection include such processes as:
• National Information Assurance Partnership (NIAP) evaluation for National Security Systems (NSS) (http://www.niap-ccevs.org/) IAW CNSSP #11
• National Institute of Standards and Technology (NIST) Cryptographic Module Validation Program (CMVP) (http://csrc.nist.gov/groups/STM/cmvp/) IAW Federal/DoD mandated standards
• DoD Unified Capabilities (UC) Approved Products List (APL)
(http://www.disa.mil/network-services/ucco) IAW DoDI 8100.04
2. ASSESSMENT CONSIDERATIONS
2.1 Security Assessment Information
The Windows Operating Systems STIG Overview, also available on IASE, is a summary-level document for the various Windows Operating System STIGs. Additional information can be found there.
2.2 Windows Server 2016 Installation Options
Windows Server 2016 has two main installation options. The server core installation is the default option. This option provides a reduced footprint and attack surface in which the standard graphical user interfaces (GUIs) are not available, with a few exceptions. Interacting with the system when logged on locally is done through a command line environment. Server core installations may also be managed remotely from another system with many of the standard GUIs. Not all server roles are supported in Server core installations.
The Windows Server 2016 (Desktop Experience) installation option provides the standard interfaces for interacting with the system. This may include binaries not specifically required for the system to function and increases the attack surface.
A new installation type is Nano Server. Nano Server is reduced even further than server core. It is not created from the standard installation media. Nano Servers are created with PowerShell to include specific components required for the server.
2.3 Group Policy Administrative Template Additions
Some of the requirements in this STIG depend on the use of additional group policy administrative templates that are not included with Windows by default. These administrative template files (.admx and .adml file types) must be copied to the appropriate location in the Windows directory to make the settings they provide visible in group policy tools.
This includes settings under MS Security Guide and MSS (Legacy). The MSS settings had previously been made available through an update of the Windows security options file (sceregvl.inf). This required a change in permissions to that file, which is typically controlled by the system. A custom template was developed to avoid this.
The custom template files (MSS-Legacy and SecGuide) are provided in the Templates directory of the STIG package.
The .admx files must be copied to the \Windows\PolicyDefinitions\ directory.
The .adml files must be copied to the \Windows\PolicyDefinitions\en-US\ directory.
3. GENERAL SECURITY REQUIREMENTS
3.1 Hardware and Firmware
The virtualization-based security features, including Credential Guard, have specific hardware and firmware requirements.
Unified Extensible Firmware Interface (UEFI) is required to support Secure Boot. Current systems may have UEFI; however, it may have been configured to operate in legacy Basic Input/Output System (BIOS) mode with earlier Windows versions. Changing this will require a complete reinstallation of the operating system instead of an in-place upgrade.
The system Central Processing Unit (CPU) must also support virtualization. Again, most current CPUs have this capability; however, it may need to be enabled in the firmware.
A Trusted Platform Module (TPM) is required to store the keys used by Credential Guard. Credential Guard can function without a TPM; however, the keys are stored in a less secure method in software.
A Microsoft TechNet article on Credential Guard, including system requirement details, can be found at the following link: https://technet.microsoft.com/itpro/windows/keep-secure/credentialguard
3.2 Virtualization-Based Security Hypervisor Code Integrity
The Windows Virtualization-Based Security (VBS) Device Guard feature known as Hypervisor Code Integrity (HVCI) may cause major functional issues when running older or noncompliant drivers. The HVCI service in Windows determines whether code executing in kernel mode is securely designed and trustworthy. It offers zero-day and vulnerability exploit protection capabilities by ensuring that all software running in kernel mode, including drivers, securely allocates memory and operates as intended.
When developing or testing Windows drivers, it is critical that the drivers are “HVCI compliant”. Hardware drivers must support HVCI if the Device Guard HVCI feature is enabled on the target system. When HVCI is enforced, functional issues have been observed on older, as well as recent, hardware running non-HVCI compliant drivers. The issues are commonly encountered with kernel mode device drivers, such as video adapters, third-party disk encryption software, anti-virus/anti-malware software, or traditional “BIOS” or other firmware. The HVCI conflicts range from minor (video resolution issues) to major (boot failures or “Blue Screen”). Confirm with your hardware vendor that its drivers support HVCI and are tested before implementing the Windows Device Guard HVCI feature.
submitted by bouncethebox to STIGSP

The Tyranny of the Minimum Viable User

In addressing shortcomings of a major web browser recently, I tossed out a neologism for a neologistic age: Minimum viable user.
This describes the lowest-skilled user a product might feasibly accommodate, or if you're business-minded, profitably accommodate. The hazard being that such an MVU then drags down the experience for others, and in particular expert or experienced users. More to follow.
There are cases where reasonable accommodations should be considered, absolutely. Though how this ought be done is also critical. And arbitrary exclusions for nonfunctional reasons -- the term for that is "discrimination", should you ask -- are right out.
Accessibility accommodations, in physical space and informational systems, is a key concern. I don't generally require these myself, but know many people who do, and have come to appreciate their concerns. I've also come to see both the increased imposition, and benefits, this offers by way of accommodating the needs.
It's often underappreciated how increased accessibility helps many, often all, users of a product or space. A classic instance would be pavement (or sidewalk) kerb cuts -- bringing the edge of a walkway to street level, rather than leaving a 10 cm ridge. This accommodates not just wheelchairs, but dollies, carts, wheeled luggage, and more. Benefits which materialised only after deployment, beyond the original intent.

Accessibility and Information Systems

For information systems -- say, webpages -- the accommodations which are most useful for perceptually-challenged users are also almost always beneficial to others: clear, high-contrast layouts. Lack of distracting screen elements. A highly semantic structure makes work easier for both screen-readers (text-to-speech) and automated parsing or classification of content. Clear typography doesn't fix all copy, but it makes bad copy all the more apparent. Again, positive externalities.
When we get to the point of process-oriented systems, the picture blurs. The fundamental problem is that an interface which doesn't match the complexity of the underlying task is always going to be unsatisfactory. Larry Wall has observed this with regard to the Perl programming language: complexity will out. In landscape design, the problem is evidenced by the term "desire path". A disagreement between use and design.[1]
At its heart, a desire path is a designer's failure to correctly anticipate, or facilitate, the needs and desires of their users. Such paths reflect emergent practices or patterns, some constructive, some challenging the integrity of a system. Mastodon Tootstorms are an example of a positive creative accommodation. Mostly.
On other services, the lack of an ability to otherwise dismiss content frequently creates an overload of the spam or abuse reporting mechanism. G+ comes to mind. If a side-effect of reporting content is that it is removed from my view, and there is no other way to accomplish that goal, then the reporting feature becomes the "remove from visibility" function. I've ... had that conversation with Google for a number of years. Or is that a monologue...
Software programming is in many ways a story of side-effects and desire paths, as is the art of crafting system exploits. PHP seems particularly prone to this, though I can't find the character-generating hack I've in mind.
There's the question of when a system should or shouldn't be particularly complex. Light switches and water taps are a case in point. The first has operated as a simple binary, the second as a variable-rate flow control, and the basic functionality has remained essentially unchanged for a century or more. Until the Internet of Broken Shit that Spies on you wizkids got ahold of them.... And modulo some simple management interfaces: timers or centralised large-building controls.
Simple tasks benefit from simple controls.
Complex tasks ... also benefit from simple controls, but no simpler than the task at hand.
A good chef, for example, needs only a modicum of basic elements. A good knife. A reliable cooktop and oven. A sink. A cutting surface. Mixing bowls. Underappreciated: measuring equipment. Measuring spoons, cups, pitchers. A scale. Thermometer. Timers. The chef also may have call for some specific processing equipment: cutting, chopping, blending, grating, and mixing tools. Powering these increases throughput, but the essential controls remain simple. And some specialised tools, say, a frosting tube, but which generally share common characteristics: they're individually simple, do one thing, usually a basic transformation, and do it well.
The complexity of the process is in the chef, training, and practice.
The antithesis of this is "cooking gadgets" -- tools or appliances which are complicated, fussy, achieve a single and non-general result, or which integrate (or attempt to do so) a full process. This is the stuff that clutters counter space and drawers: useless kitchen gadgets. A category so egregious it defies even simple listing, though you're welcome to dig through search results.
If you can only use it on one recipe, it's bad mkay?

Appropriateness of Single-use Tools: Safety equipment

On single-use tools: if that single use is saving your life in conditions of readily forseeable peril, then it may well be worth having. Lifeboats. Seatbelts. First aid kit.
That gets down to a risk assessment and mitigation calculation problem though, which may be error-prone: over- and under-estimating risks, and/or the efficacy of mitigations. Pricing risk and risk-as-economic good is another long topic.

Lifts, Telephones, and Automobiles

There are times when you absolutely should be aiming for the minimum viable user. Anything that sees widespread shared public use, for example. I shouldn't have to read the user manual to figure out how to open the front door to your building. Automatic, sensored doors would be an entirely MVU product.
I've mentioned lifts, automobiles, and telephones. Each is highly complex conceptually; two can maim or kill. All can be relatively safely used by most adults, even children. A large part of what makes lifts, automobiles, and telephones so generally usable is that the controls are very highly standardised. Mostly. The exceptions become newsworthy.
Telephones have deviated from this with the expansion of mobile and even more complex landline devices. And the specific case of business-oriented office telephones has been, for at least 30 years, a strong counterexample worth considering.

Office Phone Systems

It takes me a year or more to figure out a new office phone system. If ever. A constant for 30 years. This wasn't the case as of the 1980s, when a standard POTS-based phone might have five buttons, and the smarts were in a PBX generally located within the building.
By the 1990s, though, "smart phones" were starting to appear. Rolm was one early vendor I recall. These had an increasing mix of features, not standardised either across or within vendor lines, but generally some mix of:
  1. Voicemail
  2. Call forwarding
  3. Call conferencing
  4. Lots of other random shit to inflate marketing brochures
Feature #4 was a major problem, but the underlying one was, and remains, I think, the mismatch of comms channels and cognitive capacities a phone represents: audio, physical, textual, and short-term working memory.
The physical interface of most phones -- and I'm referring to desk sets here -- is highly constrained. There's a keypad, generally 12 buttons (not even enough for the impoverished Roman alphabet, let alone more robust ones), possibly an additional set of function buttons, and a handset, plus some base. Cords.
More advanced phonesets have perfected the technology of including a display for text which is simultaneously unreadable under any lighting conditions or viewing angle, and incapable of providing useful information in any regard. This is another engineering accomplishment with a decades-long record.
Phones are relatively good for talking, but they are miserable for communication. This is reflected in millennials' disdain for making phone calls: millennials prefer text-based apps to voice comms, as do numerous tech early-adopters. I suspect the reason is both the state-maintenance and the fragility of phone-based communications.
I'm distinguishing talking -- a longer and wandering conversation with a friend -- and communicating -- the attempt to convey or obtain some specific task-oriented or process-oriented information. The salient difference is that the latter is very strongly goal oriented, the former, not so much. That is, a "simple" phone conversation is a complex interaction and translation between visual, textual, audio, physical, and memory systems. It's also conducted without the visual cues of face-to-face communications (as are all remote comms), for further fun and games. This usually makes conversations with someone you know well (for whom you can impute those cues) generally far more straightforward than with a stranger, especially for complex discussions.
The upshot is that while a telephone is reasonably simple to use in the basic case -- establish a voice connection with another device generally associated with a person or business -- it actually fails fairly profoundly in the surrounding task context for numerous reasons. Many of which boil down to an interface which is simultaneously oversimplified and poorly suited to the task at hand.
Smartphones, and software-based telephony systems in general, followed the business phone lead.
Mobile comms have generally expanded on the poor usability of business phone systems by significantly deteriorating audio quality and dynamics -- constraints of packet-switching, compression, additional relay hops, and speed-of-light delays have boosted noise and lag to the level of interfering with the general flow of conversation. Which isn't particularly an interface failure as such (this is channel behaviour), but it encourages millennials' shift to text.
I'll save the question of how to fix voice comms for discussion.
The point I'm making is that even an apparently straightforward device and task, with a long engineering history, can find itself ill-matched to new circumstances.
There's also much path-dependence here. Lauren Weinstein on G+ enjoys digging up old AT&T engineering and marketing and/or propaganda newsreels describing development of the phone system: direct-dial, switching, 7-digit, area-code, long-distance, touch-tone. There were real and legitimate design, engineering, and use considerations put into each of these. It's not as if the systems were haphazardly put together. This still doesn't avoid the net result being a bit of a hash.
An appreciation of why Mr. Chesterton built his fence, and whether or not that rationale remains valid, is useful to keep in mind. As are path-dependencies, 2nd-system effects, and late-adopter advantages. Those building out interdependent networks after initial trial often have a significant advantage.
It's also interesting to consider what the operating environment of earlier phones was -- because it exceeded the device itself.
A business-use phone of, say, the 1970s, existed in a loosely-integrated environment comprising:
Critically: these components operated simultaneously and independently of the phone.
A modern business, software, or smartphone system may offer some, or even all, of these functions, but frequently:
The benefits are that they are generally cheaper, smaller, more portable, and create digital data which may be, if accessible to other tools, more flexible.
But enough of phones.

The Unix Philosophy

The Unix Philosophy, in Doug McIlroy's classic formulation, reads: "Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface."
It offers a tremendous amount of mileage.
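To make that concrete, here is a minimal sketch of my own (not drawn from McIlroy or any canonical source) of a Unix-style filter in C: it does exactly one thing, and it speaks text on stdin and stdout, so it composes with everything else.

    /* upcase.c -- an illustrative Unix-style filter: read text on stdin,
     * upper-case it, write it to stdout. Nothing else. */
    #include <stdio.h>
    #include <ctype.h>

    int main(void)
    {
        int c;
        while ((c = getchar()) != EOF)
            putchar(toupper(c));
        return 0;
    }

Because the only interface is a text stream, it slots into pipelines unchanged: ls | ./upcase | sort works without upcase knowing anything about ls or sort.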

Exceptions to the Unix Philosophy: Complexity Hubs

I want to talk about the apparent exceptions to the Unix philosophy: shells, editors, email, init (and especially systemd), remote filesystems, udev, firewall rules, security generally, programming languages, GUIs.
Apparently, "exceptions to the Unix philosophy" is very nearly another neologism -- I find a single result in Google, to an essay by Michael O. Church. He adds two more items: IDEs (integrated developer environments), arguably an outgrowth of editors, and databases. Both are solid calls, and both tie directly into the theme I had in mind in the preceding toot.
These are all complexity hubs -- they are loci of either control or interfacing between and among other systems or complex domains:

The GUI Mess

This leaves us with GUIs, or more generally, the concept of the domain of graphics.
The complexity here is that graphics are not text. Or at the very least, transcend text. It is possible to use text to describe graphics, and there are tools which do this: Turtle. Some CAD systems. Scalable vector graphics (SVG). But to get philosophical: the description is not the thing. The end result is visual, and whilst it might be rule-derived, it transcends the rule itself.
One argument is that when you leave the domain of text, you leave the Unix philosophy behind. I think I'm OK with that as a starting premise. This means that visual, audio, mechanical, and other sensory outputs are fundamentally different from text, and that we need to keep in mind that text, whilst powerful, has its limits.
It's also worth keeping in mind, though, what the characteristics and limits of GUIs themselves are.
Neal Stephenson, "In the Beginning was the Command Line", again, offers one such: metaphor shear. Most especially where a GUI is used to represent computer system elements themselves, it's crucial to realise that the representation is not the thing itself -- map-territory confusion. In fact a GUI isn't so much a representation as a remapping of computer state.
Unix, the C programming language, and the bash shell all remain relatively close to machine state. In many cases, the basic Unix commands are wrappers around either C language structures (e.g., printf(1) and printf(3)), or report the content of basic data structures (e.g., stat(1) and stat(2)). Even where the concept is reshaped significantly, you can still generally find the underlying concept present. This may be more foreign for newbies, but as exposure to the system is gained, interface knowledge leverages to system knowledge.
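As a rough illustration of that leverage (a sketch of my own, not anything from the essay's sources): the following C program calls stat(2) directly and prints a few of the same fields stat(1) reports, so what you learn at the command line maps straight onto the system call and its data structure.

    /* mystat.c -- print a few fields of the stat(2) structure for a path,
     * roughly what stat(1) reports. Illustrative sketch only. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        struct stat sb;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <path>\n", argv[0]);
            return 1;
        }
        if (stat(argv[1], &sb) != 0) {
            perror("stat");
            return 1;
        }
        printf("size:  %lld bytes\n", (long long)sb.st_size);
        printf("inode: %llu\n", (unsigned long long)sb.st_ino);
        printf("mode:  %o\n", (unsigned int)(sb.st_mode & 07777));
        return 0;
    }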
GUIs lose this leverage: the represented state has little coherence with the underlying system state.
Some argue that not being tied to the mechanism is an advantage -- that this allows the interface designer a freedom to explore expressions independent of the underlying mechanism.
This is true.
But it gets to another set of limitations of GUIs: they resist scripting, and they constrain information density and scale.
Scripting has the effect of constraining, for better or worse, changes to interfaces because scripts have to be updated as features change. The consequence is that tools either don't change arguments, change them with exceedingly long advance warning, or failing either of those, are rapidly discarded by those who use them due to gratuitous interface changes. The result is a strong, occasionally stifling, consistency over time.
The limits on information density and on scaling or scrolling are another factor. A good GUI might offer the ability to expand or compress a view by a few times, but it takes a very creative approach to convey the orders of magnitude scales which, say, a physical library does. Data visualisation is its own specialty, and some are good at it.
The result is that most GUI interfaces are good for a dozen, perhaps a few dozens, objects.
Exceptions to this are telling. xkcd is on the money: https://www.xkcd.com/980/ This chart manages to show values from $1 to $2.39 quadrillion ($2.39 thousand million million) within the same visualisation, a span of 15 orders of magnitude, by using a form of logarithmic scaling. This is possible, but it is difficult to do usefully or elegantly.
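For a sense of the trick involved, here is a hypothetical sketch: the $1 and $2.39 quadrillion endpoints come from the paragraph above, but the sample values and axis width are my own assumptions. Logarithmic scaling maps each value's order of magnitude, rather than the value itself, onto the axis.

    /* logscale.c -- map dollar amounts spanning ~15 orders of magnitude
     * onto a 60-character axis via log10. Compile with -lm. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double lo = 1.0;      /* $1 */
        const double hi = 2.39e15;  /* ~$2.39 quadrillion */
        const double width = 60.0;  /* axis width in characters */
        const double samples[] = { 1.0, 1e3, 1e6, 1e9, 1e12, 2.39e15 };

        for (int i = 0; i < 6; i++) {
            /* position is proportional to the value's order of magnitude */
            double pos = width * (log10(samples[i]) - log10(lo))
                               / (log10(hi) - log10(lo));
            printf("$%-10.3g -> column %5.1f of %.0f\n",
                   samples[i], pos, width);
        }
        return 0;
    }

A linear axis would collapse everything below a few hundred trillion dollars into the same pixel; the log transform is what keeps $1 and $2.39 quadrillion legible in one picture.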

GUIs, Efficiency, and Change

Change aversion and inherent limits to GUI productivity interact to create the final conflict for GUIs: the potential for interface efficiency is limited, and change is disruptive, so you lose for trying. Jamie "jwz" Zawinski notes this:
Look, in the case of all other software, I believe strongly in "release early, release often". Hell, I damned near invented it. But I think history has proven that UI is different than software.
What jwz doesn't do is explain why this is, and I'm not aware of others who have.
This also shows up in the case of Apple, a company which puts a premium on design and UI, but which is exceedingly conservative in changing UI. The original Mac desktop stuck with its initial motif from 1984 until 2001: 17 years. Its successor has changed only incrementally from 2001 to 2017, very nearly as long. Even Apple realise: you don't fuck with the GUI.
This suggests that the underlying failure of the Linux desktop effort isn't a failure to innovate, but rather far too much churn in the desktop itself.
My daily driver for 20 years has been Window Maker, itself a reimplementation of the 1989 NeXT desktop. Which is to say that a 30 year-old design works admirably. It's fast, stable, doesn't change unexpectedly with new releases or updates, and gets the fuck out of the way. It has a few customisations which tend to focus on function rather than form.

The Minimum Viable User GUI and Its Costs

Back to my starting premise: let's assume, with good reason, that the Minimum Viable User wants and needs a simple, largely pushbutton, heavily GUI, systems interface.
What does this cost us?
The answer is in the list of Unix-philosophy-violating tasks -- the complexity hubs -- enumerated earlier.

Just Who is the Minimum Viable User?

A central question, and one somewhat inexcusably buried at this point in my essay, is: who is the Minimum Viable User? One answer is the user with the lowest level of system skills still capable of operating a device -- a level an OECD survey finds is abysmally low. Over half the population, and over 2/3 in most surveyed industrialised countries, have poor, "below poor", or no computer skills at all.
I'm moving past this point quickly, but recommend very strongly reading Jakob Nielsen's commentary on this study, and the study itself: "Skills Matter: Further Results from the Survey of Adult Skills" (OECD, 2016). The state of typical user skills is exceedingly poor. If you're reading this essay, you're quite likely not among them; if you are, the comment is meant without disparagement, as a simple statement of fact: from high to low, the range of user computer skills is enormous, with the low end of the range very heavily represented in the general population. People who, largely, otherwise function quite well in society: they have jobs, responsibilities, families.
This has profound implications for futures premised on any sort of general technical literacy. As William Ophuls writes in Plato's Revenge, social systems based on the premise that all the children are above average are doomed to failure.
The main thrust of this essay though is a different concern. Global information systems which are premised on a minimal-or-worse level of sophistication by all users also bode poorly, though for different reasons: they hamper the capabilities of that small fraction -- 5-8% or less, and yes, quite probably far less -- of the population who can make highly productive use of such tools, by producing hardware and software which fails to support advanced usage.
It does this by two general modes:
The dynamics are also driven by market and business considerations -- where the money is, and how developing, shipping, and maintaining devices relate to cash flows.

The Problem-Problem Problem

One business response is to extend the MVU definition to that of the Minimum Viable-Revenue User: services are targeted at those with the discretionary income, or lack of alternatives, to prove attractive to vendors.
There's been well-founded criticism of Silicon Valley startups which have lost track of what a meaningful problem in need of a solution looks like. It's a problem problem. Or: the problem-problem problem.
Solving Minor Irritations of Rich People, or better, inventing MIoRP, as a bootstrapping method, has some arguable utility. Tesla Motors created a fun, but Very Expensive™, electrified Lotus on its way to creating a viable, practical, battery-powered Everyman vehicle. Elon Musk is a man who has made me a liar multiple times, by doing what I unequivocally stated was impossible, and he impresses the hell out of me for it.
Amazon reinvented Sears, Roebuck, & Co. for the 21st century bootstrapped off a books-by-mail business.
I'm not saying there ain't a there there. But I'm extremely unconvinced that all the there there that's claimed to be there is really there.
Swapping out the phone or fax in a laundry, food-delivery, dog-walking, or house-cleaning business is not, in the larger scheme of things, particularly disruptive. It's often not even a particularly good business when catering to the Rich and Foolish. Not that parting same from their easily-won dollars isn't perhaps a laudable venture.
The other slant of the Minimum Viable User is the one who is pushed so far up against the wall, or fenced in and the competition fenced out, that they've no option but to use your service. Until such time as you decide to drag them off the plane. Captive-market vendor-customer relationship dynamics are typically poor.
For numerous reasons, the design considerations which go into such tools are also rarely generative. Oh: Advertising is one of those domains. Remember: Advertising breeds contempt.
Each of these MVU business cases argues against designing for the generative user. A rather common failing of market-based capitalism.
Robert Nozick explains such criticism from creatives by the fact that "by and large, a capitalist society does not honor its intellectuals". A curious argument, whose counterpoint is "capitalism is favoured by those whom it does unduly reward".
That's solipsistic.
Pointing this out is useful on a number of counts. It provides a ready response to the Bullshit Argument that "the market decides". Because what becomes clear is that market forces alone are not going to do much to encourage generative-use designs. Particularly not in a world of zero-marginal-cost products. That is: products whose marginal costs are small (and hence: pricing leverage), but with high fixed costs. And that means that the market is going to deliver a bunch of shitty tools.

Getting from Zero to One for Generative Mobile Platforms

Which suggests one of a few possible avenues out of the dilemma: a large set of generative tools have been built through non-capitalistic organisation. The Free Software / Open Source world would be a prime case in point, but it's hardly the first. Scientific research and collaboration, assembly of reference tools, dictionaries, encyclopedias. That's an option.
Though they need some sort of base around which to form and organise. And in the case of software they need hardware.
For all the evil Bill Gates unleashed upon the tech world (a fair bit of it related to the MVU and MFVU concepts themselves), he also unleashed a world of i386 chipset systems on which other software systems could be developed. Saw to it that he individually and specifically profited from every one sold, mind. But he wasn't able to restrict what ran on those boxes post-delivery.
GNU/Linux may well have needed Bill Gates. (And Gates may well not have been able to avoid creating Linux.)
There are more smartphones and Android devices today than there ever were PCs, but one area of technical advance over the decades has been in locking systems down. Hard. And, well, that's a problem.
I don't think it's the only one, though.
Commodity x86 hardware had a model for the operating system capable of utilising it which already existed: Unix. Linus Torvalds may have created Linux, but he didn't design it as such. That template had been cut already. It was a one-to-two problem, a question of scaling out. Which is to say it wasn't a Zero to One problem.
And yes, Peter Thiel is an evil asshat, which is why I'm pointing you specifically at where to steal his book. That's not to say an evil asshat can't have the occasional good idea.
I'm not sure that finding (and building) the Open Mobile Device Environment is a Zero to One problem -- Google, well, Android Inc., leveraged Linux, after all. But the design constraints are significantly different.
A standalone PC workstation is much closer to a multi-user Unix server in most regards, and particularly regards UI/UX, than is a mobile device measuring 25, or 20, or 12, or 8 cm. Or without any keyboard. Or screen. And a certain set of tools and utilities must be created.
It's not as if attempts haven't been made, but they simply keep not getting anywhere. Maemo. FirefoxOS. Ubuntu Phone. Hell, the Psion and Palm devices weren't bad for what they did.
Pick one, guys & gals. Please.

The Mobile Applications Ecosystem is Broken

There's also the question of apps, and app space, itself. By one school of thought, a large count of available applications is a good thing. By another, it's a sign of failure of convergence. As of 2017, there are 2.5 million Google Play apps.
Is it even worth the search time? Is meaningful search of the space even possible?
The question occurs: is it really in Google's interest to proliferate applications which are separate, non-integrated, split development efforts, and often simply perform tasks poorly?
Why not find a way to focus that development effort to producing some truly, insanely, great apps?
The consequences are strongly reminiscent of the spyware and adware problem of desktop Windows in the early 2000s. For the same reason: competitive software development incentivises bad behaviour and poor functionality. It's the Barbarians at the Gate all over again. With so many independent development efforts, and such an inefficient communications channel to potential users, as well as poor revenue potential through kosher methods, the system is inherently incentivised to exceedingly user-hostile behaviour.
A valid counterargument would be to point to a set of readily-found, excellent, well-designed, well-behaved, user-centric tools fulfilling fundamental uses mentioned in my G+ post. But this isn't the case. Google's Play Store is an abject failure from a user perspective. And catering to the MVU carries a large share of the blame.
I'm not saying there should be only one of any given application either -- some choice is of value. Most Linux distributions will in fact offer a number of options for given functionality, both as shell or programming tools (where modular design frequently makes these drop-in replacements, down to syntax), and as GUI tools.
Whilst "freedom to fork" is a touted advantage of free software, "capacity to merge" is even more salient. Different design paths may be taken, then rejoined.
There's another line of argument about web-based interfaces. I'll skip much of that, noting only that the issues parallel much of the current discussion, and that the ability to use alternate app interfaces or browser site extensions is critical. Reddit and Reddit User Suite, by Andy Tuba, are prime exemplars of excellence in this regard.

Related Reading

A compilation of articles reflecting this trend.

Bootnote

Yes, this is a lot of words to describe the concept generally cast as "the lowest common denominator". I'm not claiming conceptual originality, but terminological originality. Additionally: this post was adapted from an earlier Mastodon Tootstorm.

Notes

  1. Reddit fans of the concept might care to visit /r/DesirePaths.
