SHDW Glossary
A list of terms that relate to all things SHDW!
Asynchronous Byzantine Fault Tolerance (aBFT): aBFT is a consensus algorithm used to provide fault tolerance for distributed systems operating over asynchronous networks. It builds on the Byzantine Fault Tolerance (BFT) algorithm and guarantees that, when properly configured and given tolerable network conditions, consensus will eventually be reached, even in the presence of faults such as node crashes, targeted cyber-attacks, or malicious manipulation. aBFT is designed to tolerate malicious or faulty actors as long as they make up less than one-third of the nodes in the network.

Blockchain: A peer-to-peer, decentralized, immutable digital ledger used to securely and efficiently store data that is permanently linked and secured using cryptography.

Byzantine Fault Tolerance: A fault-tolerance approach in distributed computing where multiple replicas are used to reach consensus, so that faulty nodes can be tolerated and the consensus of the system maintained despite errors or malicious attacks.

Consensus Algorithm: A consensus algorithm is a method of achieving agreement on a single data value across distributed systems. It is typically used in blockchain networks to arrive at a common data state and ensure consensus across network participants. It provides fault tolerance, allowing the distributed system to remain operational even if some of its components fail.

Consensus Protocols: A consensus protocol is an algorithm used to achieve agreement among distributed systems on the state of data within the system. Consensus protocols are essential in distributed systems to ensure there is no disagreement between the nodes, so that no node has a different view of the data. They typically employ some form of voting, such as majority voting, or methods like proof of work, to reach agreement on the data state.
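
The majority-voting idea behind many consensus protocols can be sketched in a few lines. This is an illustrative toy, not any specific protocol: each node reports its local view of a value, and agreement requires a strict majority.

```python
# Minimal sketch of majority-vote agreement among nodes. The function
# name and vote format are illustrative, not from any real protocol.
from collections import Counter

def majority_consensus(votes):
    """Return the agreed value if a strict majority of nodes report it."""
    if not votes:
        return None
    value, count = Counter(votes).most_common(1)[0]
    # A strict majority (more than half the nodes) is required.
    return value if count > len(votes) / 2 else None

print(majority_consensus(["A", "A", "B", "A", "C"]))  # "A" (3 of 5)
print(majority_consensus(["A", "B"]))                 # None (no majority)
```

Real consensus protocols add far more machinery (rounds, leader election, fault detection), but the quorum check above is the core agreement rule.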
Cryptographic Hashing: Cryptographic hashing is a process used to convert data into a fixed-length string of characters, or "hash", in order to protect the data and verify its authenticity. The hash cannot be reversed to recover the original data and is used extensively to ensure data integrity. Cryptographic hashes can be used to verify data integrity, authenticate data sources, and prevent tampering.

Cryptography: The practice and study of techniques used to secure communication, data, and systems by transforming them into an unreadable format. Cryptography is an important component of cybersecurity, providing data protection and confidentiality.

Data Caching: Data caching is a software engineering technique that stores frequently accessed or expensively computed data in a cache so that it can be retrieved quickly when needed. Temporarily storing data in this way can significantly improve application performance by reducing the time spent serving requests.

Data Encryption: The process of encoding data using encryption algorithms so that the content remains secure and inaccessible to users without a decryption key. Data encryption makes it difficult for unauthorized users to read confidential data.

Data Integrity: A quality of digital data that has been maintained over its entire life cycle and is consistent and correct. Data integrity is ensured through specific techniques such as data validation and error detection and correction.

Data Partitioning: The process of splitting large datasets or throughputs into smaller, more manageable units. Partitioning allows for more accurate data processing and faster results, as well as easier storage and access of the data.

Data Sharding: Data sharding is a partitioning technique which divides large datasets into smaller subsets that are stored across multiple nodes.
It is used to improve scalability, availability, and fault tolerance in distributed databases. When combined with replication, data sharding can also speed up queries by allowing them to run in parallel.

Decentralized Storage Network: A decentralized storage network is a type of distributed system which uses distributed nodes to store and serve data securely, without relying on a single point of failure. It is an advanced form of data storage and sharing that provides scalability and redundancy, allowing for more secure and reliable access than conventional centralized systems.

Delegated Proof-of-Stake (DPoS): Delegated Proof-of-Stake (DPoS) is a consensus mechanism used in some blockchain networks. It works by allowing token holders to vote for a "delegate" of their choice to validate transactions and produce new blocks on their behalf. Delegates are rewarded for their work with a percentage of transaction fees and new block rewards. DPoS networks are usually faster and more scalable than traditional PoW and PoS networks.

Digital Signatures: A digital signature is a mathematical scheme for demonstrating the authenticity of digital messages or documents. It is used to authenticate the source of a message or document and to verify its integrity. A valid digital signature gives a recipient reason to believe that the message or document was created or approved by a known sender and has not been altered in transit.

Directed Acyclic Graphs: A Directed Acyclic Graph (DAG) is a type of graph whose directed edges do not form a cycle. It consists of vertices (or nodes) and edges that connect them. The direction of an edge determines the flow of information from one node to another. DAGs are useful for modeling dependencies and data structures such as queues and trees, and for representing complex algorithms and processes in computer engineering.
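
The hash-based routing behind data sharding can be sketched simply: hash the key, take the result modulo the shard count. The names and shard count below are illustrative only.

```python
# Minimal sketch of hash-based data sharding: a key's shard is chosen by
# hashing the key and reducing it modulo the shard count. Illustrative only.
import hashlib

NUM_SHARDS = 4

def shard_for(key: str) -> int:
    """Map a key deterministically to one of NUM_SHARDS shards."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# The same key always lands on the same shard, so reads can be routed
# without a central lookup table.
assert shard_for("user:42") == shard_for("user:42")
```

Production systems usually prefer consistent hashing over plain modulo so that adding a shard does not remap most keys, but the deterministic key-to-shard mapping is the same idea.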
Distributed Computing: A type of computing in which computing resources, such as memory, disk space, and processor time, are divided between different computers working as a single system. This provides benefits such as scalability, load balancing, reduced latency, and improved resiliency. It is widely used in data centers and cloud computing.

Distributed Consensus: The process of having a distributed system come to agreement on the state of an issue or order of operations. This is achieved through communication, verification, and agreement among the connected nodes in the system.

Distributed Database: A distributed database is a type of database which stores different parts of its data across multiple devices connected over computer networks, allowing more efficient data access by multiple users and more efficient transfer of data between locations.

Distributed File System: A distributed file system is a file system that allows multiple computers to access and share files located on remote servers. It enables a computing system consisting of multiple computers to work together to store and manage data, and can be seen as a large-scale, decentralized file storage platform spanning multiple nodes. Data is replicated across the computers so that if one computer goes down, another can take its place, providing high availability, scalability, and fault tolerance.

Distributed Ledger Technology: A type of database technology that maintains a continuously growing list of records, each stored as a block and protected by cryptography. It is a database shared across multiple nodes in a network that keeps track of digital transactions between the nodes and is protected by tamper-proof security mechanisms. Transaction records are constantly updated and can be easily accessed.
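
The tamper-evidence of a ledger comes from hash-linking: each block stores the hash of its predecessor, so changing any record breaks every later link. A minimal sketch (illustrative names, not any real ledger format):

```python
# Minimal sketch of a hash-linked ledger. Each block records the previous
# block's hash, so altering any block invalidates all later links.
import hashlib
import json

def make_block(data, prev_hash):
    block = {"data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain):
    """Check each block's stored link against the previous block's hash."""
    return all(b["prev_hash"] == prev["hash"]
               for prev, b in zip(chain, chain[1:]))

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block("tx: alice -> bob", genesis["hash"])]
print(verify_chain(chain))   # True
chain[0]["hash"] = "tampered"
print(verify_chain(chain))   # False: the broken link is detected
```

Real distributed ledgers add consensus and replication on top, but the hash-link check is what makes the record tamper-evident.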
Distributed Ledger Technology Platforms: A distributed ledger technology (DLT) platform is a secure digital platform that allows data to be shared and stored in a decentralized manner across a wide range of nodes within a network. DLT platforms provide a distributed ledger solution that is tamper-resistant, secured using cryptographic protocols, and resilient even in the face of malicious actors. They also enable secure and timely data exchange, providing scalability, reliability, and transparency for transactions.

Distributed Storage Services: Services that allow users to store data across multiple physical storage locations. This increases the reliability and availability of the data and allows distributed workloads to take advantage of it.

Distributed Storage System: A distributed storage system is a set of computer systems which interact to manage, store, and back up massive amounts of data in a reliable and secure way. A distributed storage system is more fault-tolerant than conventional storage systems because its components are not vulnerable to a single point of failure, allowing it to provide higher levels of data durability and availability than a single, centralized system.

Distributed Systems: A type of computing system consisting of multiple independent components that communicate with each other to coordinate and synchronize their actions, creating a unified system as a whole. It is a software architecture designed to maximize resources across multiple computers connected over a network.

Distributed Systems Architecture: A distributed system is an interconnection of multiple independent computers that coordinate their activities and share the resources of a network, usually communicating via a message-passing interface or remote procedure calls.
Distributed systems architecture includes design guidelines and approaches to ensure a system's resilience, performance, scalability, availability, and security.

Erasure Coded File Storage: Erasure coded file storage is a storage strategy that divides data into fragments and encodes them with redundant parity fragments using error-correction algorithms, so that the original data can be reconstructed even if some fragments are lost. This redundancy makes erasure coding valuable for durably storing critical data, as it increases reliability while requiring less storage than full replication.

Fault Tolerance: Fault tolerance is the capability of a system to continue normal operation despite hardware or software errors, disruptions, or even the loss of components or data. Fault-tolerant systems are designed to be resilient to failure and to continue operating in the event of a partial failure of the system.

Gossip Protocol: A distributed algorithm in which each member of a distributed system periodically communicates a message to one or more nodes, which then pass the same message on to other nodes, until all members of the network have received it. This allows the system to remain aware of all other nodes and records without the need for a central server.

Hashgraph: A distributed ledger platform that uses a virtual voting algorithm to achieve consensus faster than a traditional distributed ledger. It allows for the secure transfer of digital objects and data without needing a third party for authentication. Its consensus algorithm is based on a gossip protocol, through which nodes share news and updates with each other.

Hyperledger Fabric: Hyperledger Fabric is an open source software project that provides a foundation for developing applications that use distributed ledgers. It allows for secure and permissioned transactions between multiple businesses, enabling an ecosystem of participants to securely exchange data and assets.
It provides support for smart contracts, digital asset management, encryption, and identity services.

Immutability: The capacity of an object to remain unchanged over time, even across multiple operations. In programming, immutability is a property by which an object cannot be modified; operations on the object return a new instance instead of modifying the original.

Interoperability: The ability of systems or components to work together and exchange data, even though they may come from different manufacturers and have different technical specifications.

Key Management: Key management is the overall management of a set of cryptographic keys used to protect data both in transit over a network and in storage. Proper key management ensures secure communication within applications and systems and provides a baseline for protecting sensitive information. Key managers are responsible for the storage, rotation, and renewal of encryption keys and must ensure that data is secure against unauthorized disclosure, modification, or loss.

Merkle Tree: A Merkle tree is a tree-based data structure used for verifying the integrity of large sets of data. It consists of data nodes arranged in a parent-child relationship, with each parent node hashing together the hashes of its child nodes to create a hash of its own. This process is repeated until a single root hash remains, producing a hierarchical, hashed structure.

Multi-Node Clusters: A type of computing architecture composed of multiple connected computers, or nodes, that work together and act as a single system. A multi-node cluster allows for increased processing and storage capacity, as well as increased reliability and availability.

Nodes: A node is a basic unit of a data structure, such as a linked list or tree. Nodes contain data and may link to other nodes.
Nodes are used to implement graphs, linked lists, trees, stacks, and queues, and to represent algorithms and data structures in computer science.

Oracles: An oracle is a system designed to provide users with the result of a query. Oracles are often relied on to deliver accurate and reliable conclusions based on the data provided to them. There have been several generations of oracles, ranging from hardware- to software-based systems, each with different capabilities.

Orchestration: Orchestration is the process of using software automation to manage and configure cloud-based computing services. It is used to automate the management and deployment of workloads across multiple cloud platforms, allowing organizations to gain efficiency and scalability.

Peer-to-Peer Networking: Peer-to-peer networking is a network architecture model in which computers (or peer nodes) are connected in such a way that each node can act as a client or a server for the other nodes in the network. It does not rely on a central server to manage communication between the connected peers.

Proof of Stake (PoS): A consensus mechanism in blockchain technology which allows nodes to validate transactions and produce new blocks according to the amount of coins each node holds. It is an alternative to the Proof of Work (PoW) consensus protocol. In PoS, validators stake their coins, meaning they deposit coins with the blockchain protocol before they can validate blocks. Validators receive rewards for creating blocks and are penalized for malicious behavior.

Proof of Storage (PoSt): Proof of Storage is a cryptographic consensus mechanism used to attest to the storage of data in a distributed storage network.
The consensus model requires a randomly chosen subset of storage miners to periodically provide cryptographic proofs that they are storing data correctly and that all necessary nodes are online. The goal of PoSt is to ensure that stored data is secure, and to increase the security and transparency of distributed storage networks.

Proof of Work (PoW): A consensus algorithm by which a set of data called a "block" is validated by solving computationally taxing mathematical problems, typically large amounts of hashing, to make sure the blocks of data in the chain are legitimate and valid. It is used as a security measure in blockchains: each newly created block must carry a valid PoW to be accepted. The difficulty of the computational problems increases over time as the blockchain grows, incentivizing participants to keep the chain running.

Public Distributed Ledger: A public distributed ledger is a decentralized database that holds information about all the transactions that have taken place across the network, distributed and maintained by all participants without the need for a central authority to manage and validate it. All participants in the network can access, view, and validate the data stored on the ledger.
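
The "computationally taxing problem" in Proof of Work is usually a hash search: find a nonce such that the block's hash meets a difficulty target. A toy sketch (the leading-zeros target is illustrative; real chains use a numeric threshold):

```python
# Minimal sketch of Proof of Work: search for a nonce whose SHA-256 hash
# of (data + nonce) starts with a required number of zero hex digits.
import hashlib

def mine(data: str, difficulty: int = 3):
    """Return (nonce, digest) where digest has `difficulty` leading zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Finding the nonce is expensive; verifying it takes a single hash.
nonce, digest = mine("block #1")
assert digest.startswith("000")
```

The asymmetry shown in the last comment is the whole point: producing the proof costs many hash attempts, while any node can check it with one.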
Public Key Infrastructure (PKI): A set of protocols, services, and standards that manage, distribute, identify, and authenticate public encryption keys associated with users, computers, and organizations in a network. PKI enables secure communication and ensures that only the intended recipient can read a message.

QuickP2P: A type of networking protocol focused on peer-to-peer (P2P) computing that allows users to exchange files and data quickly across multiple computers. QuickP2P works by breaking files into small blocks, which the requesting user can then rapidly download in parallel.

Quorum: The minimum number of nodes in a distributed system that must participate in order to reach a consensus. The quorum value is set based on the number of nodes in the system and must be greater than half of the total nodes present.

Replication: The process of creating redundant copies of data or objects in distributed systems for the purpose of fault-tolerant data storage or network operations.

Routing Protocols: Protocols that direct traffic between computers across a network or the Internet, determining the routing path for data packets and exchanging information about available routes between routers. Routing protocols define how routers communicate with each other to exchange network information and configure the best path for data traffic.

Scalability: The ability of a system to increase its performance or capacity when given additional resources such as computing power, memory, data storage, or network bandwidth. Scalability is an important consideration when developing systems that must handle increasing amounts of data, workload, or users.

Secure Messaging: Secure messaging is the process of sending or receiving encrypted messages between two or more parties, with the intent of ensuring that only the intended recipient can access the contents of the message.
The encryption process generally involves the use of public and private keys, making it nearly impossible for anyone who intercepts a message on the network to read its contents.

Secure Multi-Party Computation (MPC): A computer security framework that allows several parties to jointly compute a function over their private inputs without revealing anything other than their respective outputs to the other parties. MPC leverages cryptography to achieve privacy and security in computation.

Security Protocols: Security protocols are systems of standard rules created for computer networks to protect data and enable secure communication between devices. They are commonly used for authentication, encryption, confidentiality, and data integrity.

Smart Contracts: Contracts written in computer code that are stored and executed on a blockchain network. Smart contracts are self-executing, autonomously carrying out the terms of an agreement between two or more parties when predetermined conditions are met, without the need for a third party or intermediary. They are used to exchange money, assets, shares, or anything of value in a secure and trustless manner, and are widely used within blockchain-based applications to reduce risk and increase speed and accuracy.

State Machines: A state machine is a system composed of transitioning states, where each state is determined by the previous state and the current inputs. Each state has associated conditions and outputs; when the previous state and input match the conditions of a transition, a corresponding output is generated. State machines are commonly used in computer engineering as finite automata.
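
A state machine can be captured as a transition table mapping (current state, input) to the next state. The turnstile states below are a standard textbook example, not from SHDW:

```python
# Minimal sketch of a finite state machine: the next state is a pure
# function of the current state and the input symbol (turnstile example).
TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def run(start, inputs):
    """Feed a sequence of inputs through the machine, return final state."""
    state = start
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
    return state

print(run("locked", ["coin", "push"]))  # "locked"
print(run("locked", ["coin", "coin"]))  # "unlocked"
```

Because every transition is explicit in the table, the machine's full behavior can be inspected and tested exhaustively, which is why state machines are popular for modeling protocol and replication logic.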
Synchronization Methods: Methods of ensuring that different parts of a distributed application or system are working from the same shared set of data at a given point in time. This can be achieved by actively sending data among components or by passively waiting for components to request data before sending it.

Synchronous Byzantine Fault Tolerance (SBFT): A consensus algorithm for blockchain networks that requires each node to be connected and active so that all network messages can be exchanged and validated within a given time period. This algorithm provides a way for a distributed network to come to consensus even when some participants may be compromised or malicious.

Time-Stamping: The technique of assigning a unique and precise time value to every event stored in a record in order to define a chronological order of those events.

Virtual Voting: A consensus technique, used in hashgraph-based systems, in which nodes do not exchange explicit votes over the network. Instead, each node uses its knowledge of the gossip history to calculate how every other node would vote, allowing the network to reach consensus without the bandwidth cost of actual voting rounds.

Web 3.0: The Web 3.0 concept refers to the newest generation of the internet. It is highly decentralized and automated, based on AI and distributed ledger technologies such as blockchain. It allows for a more open and secure infrastructure for data storage, authentication, and interactions between devices across the web. It has the potential to be much smarter, faster, and more efficient than its predecessor, Web 2.0.