diff --git a/.nojekyll b/.nojekyll
index c1507fa0..5d32de40 100644
--- a/.nojekyll
+++ b/.nojekyll
@@ -1 +1 @@
-72c27ed9
\ No newline at end of file
+5c8e4c92
\ No newline at end of file
diff --git a/CNAME b/CNAME
deleted file mode 100644
index 4e630a5d..00000000
--- a/CNAME
+++ /dev/null
@@ -1 +0,0 @@
-mlsysbook.ai
\ No newline at end of file
diff --git a/Machine-Learning-Systems.pdf b/Machine-Learning-Systems.pdf
index 7e11911d..8abb03cd 100644
Binary files a/Machine-Learning-Systems.pdf and b/Machine-Learning-Systems.pdf differ
diff --git a/contents/ai_for_good/ai_for_good.html b/contents/ai_for_good/ai_for_good.html
index d37ae310..dffe2fc0 100644
--- a/contents/ai_for_good/ai_for_good.html
+++ b/contents/ai_for_good/ai_for_good.html
@@ -799,7 +799,7 @@
Widespread TinyML applications can help digitize smallholder farms to increase productivity, incomes, and resilience. The low cost of hardware and minimal connectivity requirements make solutions accessible. Projects across the developing world have shown the benefits:
Microsoft’s FarmBeats project is an end-to-end approach to enable data-driven farming by using low-cost sensors, drones, and vision and machine learning algorithms. The project aims to solve the problem of limited adoption of technology in farming due to the need for more power and internet connectivity in farms and the farmers’ limited technology savviness. The project aims to increase farm productivity and reduce costs by coupling data with farmers’ knowledge and intuition about their farms. The project has successfully enabled actionable insights from data by building artificial intelligence (AI) or machine learning (ML) models based on fused data sets.
Microsoft’s FarmBeats project is an end-to-end approach to enable data-driven farming by using low-cost sensors, drones, and vision and machine learning algorithms. The project seeks to solve the problem of limited technology adoption in farming, driven by the scarcity of power and internet connectivity on farms and farmers’ limited familiarity with technology. The project strives to increase farm productivity and reduce costs by coupling data with farmers’ knowledge and intuition about their farms. The project has successfully enabled actionable insights from data by building artificial intelligence (AI) or machine learning (ML) models based on fused data sets.
In Sub-Saharan Africa, off-the-shelf cameras and edge AI have cut cassava disease losses from 40% to 5%, protecting a staple crop (Ramcharan et al. 2017).
In Indonesia, sensors monitor microclimates across rice paddies, optimizing water usage even with erratic rains (Tirtalistyani, Murtiningrum, and Kanwar 2022).
Traditional monitoring methods are expensive, labor-intensive, and difficult to deploy remotely. The proposed TinyML solution aims to overcome these barriers. Small microphones coupled with machine learning algorithms can classify mosquitoes by species based on minute differences in wing oscillations. The TinyML software runs efficiently on low-cost microcontrollers, eliminating the need for continuous connectivity.
+Traditional monitoring methods are expensive, labor-intensive, and difficult to deploy remotely. The proposed TinyML solution overcomes these barriers. Small microphones coupled with machine learning algorithms can classify mosquitoes by species based on minute differences in wing oscillations. The TinyML software runs efficiently on low-cost microcontrollers, eliminating the need for continuous connectivity.
A collaborative research team from the University of Khartoum and the ICTP is exploring an innovative solution using TinyML. In a recent paper, they presented a low-cost device that can identify disease-spreading mosquito species through their wing beat sounds (Altayeb, Zennaro, and Rovai 2022).
Additionally, a data-centric approach can often lead to simpler models that are easier to interpret and maintain. Because the emphasis is on the data rather than the model architecture, simpler models can achieve high performance when trained on high-quality data.
-The shift towards data-centric AI represents a significant paradigm shift. By prioritizing the quality of the input data, this approach aims to improve model performance and generalization capabilities, ultimately leading to more robust and reliable AI systems. As we continue to advance in our understanding and application of AI, the data-centric approach is likely to play an important role in shaping the future of this field.
+The move towards data-centric AI represents a significant paradigm shift. By prioritizing the quality of the input data, this approach improves model performance and generalization capabilities, ultimately leading to more robust and reliable AI systems. As we continue to advance in our understanding and application of AI, the data-centric approach is likely to play an important role in shaping the future of this field.
Data benchmarking aims to evaluate common issues in datasets, such as identifying label errors, noisy features, representation imbalance (for example, out of the 1000 classes in Imagenet-1K, there are over 100 categories which are just types of dogs), class imbalance (where some classes have many more samples than others), whether models trained on a given dataset can generalize to out-of-distribution features, or what types of biases might exist in a given dataset (Mattson et al. 2020b). In its simplest form, data benchmarking aims to improve accuracy on a test set by removing noisy or mislabeled training samples while keeping the model architecture fixed. Recent competitions in data benchmarking have invited participants to submit novel augmentation strategies and active learning techniques.
+Data benchmarking focuses on evaluating common issues in datasets, such as identifying label errors, noisy features, representation imbalance (for example, out of the 1000 classes in Imagenet-1K, there are over 100 categories which are just types of dogs), class imbalance (where some classes have many more samples than others), whether models trained on a given dataset can generalize to out-of-distribution features, or what types of biases might exist in a given dataset (Mattson et al. 2020b). In its simplest form, data benchmarking seeks to improve accuracy on a test set by removing noisy or mislabeled training samples while keeping the model architecture fixed. Recent competitions in data benchmarking have invited participants to submit novel augmentation strategies and active learning techniques.
Data-centric techniques continue to gain attention in benchmarking, especially as foundation models are increasingly trained on self-supervised objectives. Compared to smaller datasets like Imagenet-1K, massive datasets commonly used in self-supervised learning, such as Common Crawl, OpenImages, and LAION-5B, contain higher amounts of noise, duplicates, bias, and potentially offensive data.
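The simplest data-benchmarking step described above, removing likely mislabeled samples before training, can be sketched with a label-agreement heuristic. This is an illustrative NumPy example, not code from any cited competition: it plants label errors in a toy dataset and flags samples whose label disagrees with most of their nearest neighbors, one of many possible noisy-label detectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated classes; deliberately flip 10 labels to simulate noise
X = np.vstack([rng.normal(-2, 0.5, (100, 2)), rng.normal(2, 0.5, (100, 2))])
y_true = np.array([0] * 100 + [1] * 100)
flipped = rng.choice(200, size=10, replace=False)
y_noisy = y_true.copy()
y_noisy[flipped] ^= 1

def flag_label_errors(X, y, k=10):
    """Flag samples whose label disagrees with most of their k nearest neighbors."""
    suspects = []
    for i in range(len(X)):
        dists = np.linalg.norm(X - X[i], axis=1)
        neighbors = np.argsort(dists)[1:k + 1]  # skip the sample itself
        if np.mean(y[neighbors] == y[i]) < 0.5:
            suspects.append(i)
    return np.array(suspects)

suspects = flag_label_errors(X, y_noisy)
# Remove the flagged samples while keeping the "model" (here, the data split) fixed
clean_X = np.delete(X, suspects, axis=0)
clean_y = np.delete(y_noisy, suspects)
print(len(suspects))  # count of flagged samples
```

Real systems use more principled detectors (confident learning, cross-validated loss ranking), but the workflow is the same: score each training sample, drop the suspicious ones, retrain.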
diff --git a/contents/conclusion/conclusion.html b/contents/conclusion/conclusion.html
index 3f1483d0..d2ef4cb9 100644
--- a/contents/conclusion/conclusion.html
+++ b/contents/conclusion/conclusion.html
@@ -694,7 +694,7 @@
Throughout this book, we have looked into the intricacies of ML systems, examining the critical components and best practices necessary to create a seamless and efficient pipeline. From data preprocessing and model training to deployment and monitoring, we have provided insights and guidance to help readers navigate the complex landscape of ML system development.
ML systems involve complex workflows, spanning various topics from data engineering to model deployment on diverse systems (Chapter 4). By providing an overview of these ML system components, we have aimed to showcase the tremendous depth and breadth of the field and expertise that is needed. Understanding the intricacies of ML workflows is crucial for practitioners and researchers alike, as it enables them to navigate the landscape effectively and develop robust, efficient, and impactful ML solutions.
-By focusing on the systems aspect of ML, we aim to bridge the gap between theoretical knowledge and practical implementation. Just as a healthy human body system allows the organs to function optimally, a well-designed ML system enables the models to consistently deliver accurate and reliable results. This book aims to empower readers with the knowledge and tools necessary to build ML systems that showcase the underlying models’ power and ensure smooth integration and operation, much like a well-functioning human body.
+By focusing on the systems aspect of ML, we aim to bridge the gap between theoretical knowledge and practical implementation. Just as a healthy human body system allows the organs to function optimally, a well-designed ML system enables the models to consistently deliver accurate and reliable results. This book’s goal is to empower readers with the knowledge and tools necessary to build ML systems that showcase the underlying models’ power and ensure smooth integration and operation, much like a well-functioning human body.
In this context, using KWS as an example, we can break each of the steps out as follows:
Identifying the Problem: At its core, KWS aims to detect specific keywords amidst ambient sounds and other spoken words. The primary problem is to design a system that can recognize these keywords with high accuracy, low latency, and minimal false positives or negatives, especially when deployed on devices with limited computational resources.
Identifying the Problem: At its core, KWS detects specific keywords amidst ambient sounds and other spoken words. The primary problem is to design a system that can recognize these keywords with high accuracy, low latency, and minimal false positives or negatives, especially when deployed on devices with limited computational resources.
Setting Clear Objectives: The objectives for a KWS system might include:
Embedded AI, the integration of AI algorithms directly into hardware devices, naturally gains from deep learning capabilities. Combining deep learning algorithms and embedded systems has laid the groundwork for intelligent, autonomous devices capable of advanced on-device data processing and analysis. Deep learning aids in extracting complex patterns and information from input data, which is essential in developing smart embedded systems, from household appliances to industrial machinery. This collaboration aims to usher in a new era of intelligent, interconnected devices that can learn and adapt to user behavior and environmental conditions, optimizing performance and offering unprecedented convenience and efficiency.
+Embedded AI, the integration of AI algorithms directly into hardware devices, naturally gains from deep learning capabilities. Combining deep learning algorithms and embedded systems has laid the groundwork for intelligent, autonomous devices capable of advanced on-device data processing and analysis. Deep learning aids in extracting complex patterns and information from input data, which is essential in developing smart embedded systems, from household appliances to industrial machinery. This collaboration ushers in a new era of intelligent, interconnected devices that can learn and adapt to user behavior and environmental conditions, optimizing performance and offering unprecedented convenience and efficiency.
A neural network receives an input, performs a calculation, and produces a prediction. The prediction is determined by the calculations performed within the sets of perceptrons found between the input and output layers. These calculations depend primarily on the input and the weights. Since you do not have control over the input, the objective during training is to adjust the weights in such a way that the output of the network provides the most accurate prediction.
-The training process involves several key steps, beginning with the forward pass, where the existing weights of the network are used to calculate the output for a given input. This output is then compared to the true target values to calculate an error, which measures how well the network’s prediction matches the expected outcome. Following this, a backward pass is performed. This involves using the error to make adjustments to the weights of the network through a process called backpropagation. This adjustment aims to reduce the error in subsequent predictions. The cycle of forward pass, error calculation, and backward pass is repeated iteratively. This process continues until the network’s predictions are sufficiently accurate or a predefined number of iterations is reached, effectively minimizing the loss function used to measure the error.
+The training process involves several key steps, beginning with the forward pass, where the existing weights of the network are used to calculate the output for a given input. This output is then compared to the true target values to calculate an error, which measures how well the network’s prediction matches the expected outcome. Following this, a backward pass is performed. This involves using the error to make adjustments to the weights of the network through a process called backpropagation. This adjustment reduces the error in subsequent predictions. The cycle of forward pass, error calculation, and backward pass is repeated iteratively. This process continues until the network’s predictions are sufficiently accurate or a predefined number of iterations is reached, effectively minimizing the loss function used to measure the error.
The forward pass is the initial phase where data moves through the network from the input to the output layer. At the start of training, the network’s weights are randomly initialized, setting the initial conditions for learning. During the forward pass, each layer performs specific computations on the input data using these weights and biases, and the results are then passed to the subsequent layer. The final output of this phase is the network’s prediction. This prediction is compared to the actual target values present in the dataset to calculate the loss, which can be thought of as the difference between the predicted outputs and the target values. The loss quantifies the network’s performance at this stage, providing a crucial metric for the subsequent adjustment of weights during the backward pass.
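The forward pass, loss calculation, and backward pass described above can be sketched end to end. This is an illustrative NumPy example of a tiny one-hidden-layer regressor, not code from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = 2x from 64 samples
X = rng.uniform(-1, 1, size=(64, 1))
y = 2.0 * X

# Randomly initialized weights set the initial conditions for learning
W1 = rng.normal(0, 0.5, size=(1, 8))
W2 = rng.normal(0, 0.5, size=(8, 1))

for step in range(2000):
    # Forward pass: layer-by-layer computation produces the prediction
    h = np.tanh(X @ W1)
    pred = h @ W2
    # Loss: how far the prediction is from the target values
    loss = np.mean((pred - y) ** 2)
    # Backward pass (backpropagation): gradients of the loss w.r.t. each weight
    d_pred = 2 * (pred - y) / len(X)
    dW2 = h.T @ d_pred
    d_h = (d_pred @ W2.T) * (1 - h ** 2)  # derivative of tanh
    dW1 = X.T @ d_h
    # Weight update: adjust weights to reduce the error on the next iteration
    W2 -= 0.1 * dW2
    W1 -= 0.1 * dW1

print(round(float(loss), 5))  # final training loss
```

The loop is exactly the cycle in the text: forward pass, error calculation, backward pass, repeated until the loss is sufficiently small or the iteration budget runs out.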
@@ -1003,7 +1003,7 @@
GANs consist of two networks, a generator and a discriminator, trained simultaneously through adversarial training (Goodfellow et al. 2020). The generator produces data that tries to mimic the real data distribution, while the discriminator aims to distinguish between real and generated data. GANs are widely used in image generation, style transfer, and data augmentation.
+GANs consist of two networks, a generator and a discriminator, trained simultaneously through adversarial training (Goodfellow et al. 2020). The generator produces data that tries to mimic the real data distribution, while the discriminator distinguishes between real and generated data. GANs are widely used in image generation, style transfer, and data augmentation.
In embedded settings, GANs could be used for on-device data augmentation to improve the training of models directly on the embedded device, enabling continual learning and adaptation to new data without the need for cloud computing resources.
diff --git a/contents/efficient_ai/efficient_ai.html b/contents/efficient_ai/efficient_ai.html
index 0430f7de..516688bb 100644
--- a/contents/efficient_ai/efficient_ai.html
+++ b/contents/efficient_ai/efficient_ai.html
@@ -666,7 +666,7 @@
Efficient hardware for inference speeds up the process, saves energy, extends battery life, and can operate in real-time conditions. As AI continues to be integrated into myriad applications, from smart cameras to voice assistants, the role of optimized hardware will only become more prominent. By leveraging these specialized hardware components, developers and engineers can bring the power of AI to devices and situations that were previously unthinkable.
Machine learning, and especially deep learning, involves enormous amounts of computation. Models can have millions to billions of parameters, often trained on vast datasets. Every operation, every multiplication or addition, demands computational resources. Therefore, the precision of the numbers used in these operations can significantly impact the computational speed, energy consumption, and memory requirements. This is where the concept of efficient numerics comes into play.
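To make the precision trade-off concrete, here is an illustrative sketch of affine int8 quantization, one common reduced-precision numeric format. The function names are our own, not a library API:

```python
import numpy as np

def quantize_int8(w):
    """Affine-quantize a float tensor to int8 with one scale and zero point."""
    w_min, w_max = float(w.min()), float(w.max())
    span = w_max - w_min
    scale = span / 255.0 if span > 0 else 1.0
    zero_point = round(-w_min / scale) - 128
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 1, size=(256, 256)).astype(np.float32)

q, scale, zp = quantize_int8(w)
w_hat = dequantize(q, scale, zp)

print(w.nbytes // q.nbytes)                     # storage shrinks 4x
print(float(np.abs(w - w_hat).max()), scale)    # error vs. quantization step
```

Every stored weight now costs one byte instead of four, and the reconstruction error stays within about one quantization step, which is the basic bargain behind efficient numerics.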
In many cases, machine learning can have a relatively high barrier of entry compared to other fields. To successfully train and deploy models, one needs to have a critical understanding of a variety of disciplines, from data science (data processing, data cleaning), model structures (hyperparameter tuning, neural network architecture), hardware (acceleration, parallel processing), and more depending on the problem at hand. The complexity of these problems has led to the introduction of frameworks such as AutoML, which aims to make “Machine learning available for non-Machine Learning experts” and to “automate research in machine learning.” They have constructed AutoWEKA, which aids in the complex process of hyperparameter selection, and Auto-sklearn and Auto-pytorch, an extension of AutoWEKA into the popular sklearn and PyTorch Libraries.
+In many cases, machine learning can have a relatively high barrier of entry compared to other fields. To successfully train and deploy models, one needs a critical understanding of a variety of disciplines, from data science (data processing, data cleaning), model structures (hyperparameter tuning, neural network architecture), hardware (acceleration, parallel processing), and more, depending on the problem at hand. The complexity of these problems has led to the introduction of frameworks such as AutoML, which tries to make “Machine learning available for non-Machine Learning experts” and to “automate research in machine learning.” Its creators have constructed AutoWEKA, which aids in the complex process of hyperparameter selection, as well as Auto-sklearn and Auto-PyTorch, extensions of AutoWEKA into the popular sklearn and PyTorch libraries.
While these efforts to automate parts of machine learning tasks are underway, others have focused on making machine learning models easier by deploying no-code/low-code machine learning, utilizing a drag-and-drop interface with an easy-to-navigate user interface. Companies such as Apple, Google, and Amazon have already created these easy-to-use platforms to allow users to construct machine learning models that can integrate into their ecosystem.
These steps to remove barriers to entry continue to democratize machine learning, make it easier for beginners to access, and simplify workflow for experts.
Transfer learning is the practice of using knowledge gained from a pre-trained model to train and improve the performance of a model for a different task. For example, models such as MobileNet and ResNet are trained on the ImageNet dataset. To reuse such a model, one may freeze it and utilize it as a feature extractor to train a much smaller model built on top of the extracted features. One can also fine-tune the entire model to fit the new task. Machine learning frameworks make it easy to load pre-trained models, freeze specific layers, and train custom layers on top. They simplify this process by providing intuitive APIs and easy access to large repositories of pre-trained models.
-Transfer learning has challenges, such as the modified model’s inability to conduct its original tasks after transfer learning. Papers such as “Learning without Forgetting” by Z. Li and Hoiem (2018) aims to address these challenges and have been implemented in modern machine learning platforms.
+Transfer learning has challenges, such as the modified model’s inability to perform its original tasks after transfer learning. Methods such as “Learning without Forgetting” by Z. Li and Hoiem (2018) address these challenges and have been implemented in modern machine learning platforms.
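The freeze-and-retrain pattern above is easy to see in a small sketch. Here the “pretrained” extractor is just a fixed random projection standing in for something like MobileNet, purely for illustration; only the small head on top is trained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class dataset for the new task
X = np.vstack([rng.normal(-1, 0.7, (80, 4)), rng.normal(1, 0.7, (80, 4))])
y = np.array([0] * 80 + [1] * 80)

W_frozen = rng.normal(size=(4, 16))  # stands in for pretrained weights

def features(X):
    """Frozen feature extractor: its weights are never updated."""
    return np.tanh(X @ W_frozen)

w_head = np.zeros(16)  # trainable classification head
b = 0.0
for _ in range(500):
    F = features(X)
    p = 1 / (1 + np.exp(-(F @ w_head + b)))   # sigmoid head
    grad_w = F.T @ (p - y) / len(X)           # logistic-loss gradients
    grad_b = float(np.mean(p - y))
    w_head -= 0.5 * grad_w                    # only the head is updated
    b -= 0.5 * grad_b

preds = 1 / (1 + np.exp(-(features(X) @ w_head + b))) > 0.5
acc = float(np.mean(preds == y))
print(acc)  # training accuracy of the new head
```

In a real framework the same structure appears as "freeze the backbone, attach and train a new classifier layer"; fine-tuning simply unfreezes some or all of the backbone as well.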
Neuromorphic computing, which aims to emulate biological neural systems for efficient ML inference, can use analog circuits to implement the key components and behaviors of brains. For example, researchers have designed analog circuits to model neurons and synapses using capacitors, transistors, and operational amplifiers (Hazan and Ezra Tsur 2021). The capacitors can exhibit the spiking dynamics of biological neurons, while the amplifiers and transistors provide a weighted summation of inputs to mimic dendrites. Variable resistor technologies like memristors can realize analog synapses with spike-timing-dependent plasticity, which can strengthen or weaken connections based on spiking activity.
+Neuromorphic computing, which emulates biological neural systems for efficient ML inference, can use analog circuits to implement the key components and behaviors of brains. For example, researchers have designed analog circuits to model neurons and synapses using capacitors, transistors, and operational amplifiers (Hazan and Ezra Tsur 2021). The capacitors can exhibit the spiking dynamics of biological neurons, while the amplifiers and transistors provide a weighted summation of inputs to mimic dendrites. Variable resistor technologies like memristors can realize analog synapses with spike-timing-dependent plasticity, which can strengthen or weaken connections based on spiking activity.
Startups like SynSense have developed analog neuromorphic chips containing these biomimetic components (Bains 2020). This analog approach results in low power consumption and high scalability for edge devices versus complex digital SNN implementations.
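The spiking dynamics those capacitor-based circuits exhibit can be illustrated in software with the standard simplification, a leaky integrate-and-fire neuron. This is an illustrative model of the general behavior, not the SynSense design:

```python
import numpy as np

def lif_simulate(current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: integrate input with leak, spike at threshold."""
    v = v_reset
    spikes, trace = [], []
    for i in current:
        v += dt / tau * (-v + i)   # leaky (RC-like) integration
        if v >= v_thresh:
            spikes.append(True)
            v = v_reset            # membrane resets after a spike
        else:
            spikes.append(False)
        trace.append(v)
    return np.array(spikes), np.array(trace)

# A constant supra-threshold input makes the neuron spike periodically
spikes, trace = lif_simulate(np.full(100, 1.5))
print(int(spikes.sum()))  # number of spikes over 100 steps
```

With a sub-threshold input the membrane potential settles below threshold and the neuron stays silent, which is the same thresholding behavior the analog capacitor circuit provides physically.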
In the early 1990s, Mark Weiser, a pioneering computer scientist, introduced the world to a revolutionary concept that would forever change how we interact with technology. He envisioned a future where computing would be seamlessly integrated into our environments, becoming an invisible, integral part of daily life. This vision, which he termed “ubiquitous computing,” promised a world where technology would serve us without demanding our constant attention or interaction. Fast forward to today, and we find ourselves on the cusp of realizing Weiser’s vision, thanks to the advent and proliferation of machine learning systems.
+In the early 1990s, Mark Weiser, a pioneering computer scientist, introduced the world to a revolutionary concept that would forever change how we interact with technology. This idea was succinctly captured in his paper “The Computer for the 21st Century” (Figure 1.1). He envisioned a future where computing would be seamlessly integrated into our environments, becoming an invisible, integral part of daily life. This vision, which he termed “ubiquitous computing,” promised a world where technology would serve us without demanding our constant attention or interaction. Fast forward to today, and we find ourselves on the cusp of realizing Weiser’s vision, thanks to the advent and proliferation of machine learning systems.
Data poisoning is a pressing concern for secure on-device learning since data at the endpoint cannot be easily monitored in real-time. If models are allowed to adapt on their own, then we run the risk of the device acting maliciously. However, continued research in adversarial ML is needed to develop robust solutions to detect and mitigate such data attacks.
DevOps has its roots in the Agile movement, which began in the early 2000s. Agile provided the foundation for a more collaborative approach to software development and emphasized small, iterative releases. However, Agile primarily focuses on collaboration between development teams. As Agile methodologies became more popular, organizations realized the need to extend this collaboration to operations teams.
The siloed nature of development and operations teams often led to inefficiencies, conflicts, and delays in software delivery. This need for better collaboration and integration between these teams led to the DevOps movement. DevOps can be seen as an extension of Agile principles that brings operations teams into the fold.
-The key principles of DevOps include collaboration, automation, continuous integration, delivery, and feedback. DevOps focuses on automating the entire software delivery pipeline, from development to deployment. It aims to improve the collaboration between development and operations teams, utilizing tools like Jenkins, Docker, and Kubernetes to streamline the development lifecycle.
+The key principles of DevOps include collaboration, automation, continuous integration, delivery, and feedback. DevOps focuses on automating the entire software delivery pipeline, from development to deployment. It improves the collaboration between development and operations teams, utilizing tools like Jenkins, Docker, and Kubernetes to streamline the development lifecycle.
While Agile and DevOps share common principles around collaboration and feedback, DevOps specifically targets integrating development and IT operations, expanding Agile beyond just development teams. It introduces practices and tools to automate software delivery and improve the speed and quality of software releases.
MLOps, on the other hand, stands for Machine Learning Operations, and it extends the principles of DevOps to the ML lifecycle. MLOps aims to automate and streamline the end-to-end ML lifecycle, from data preparation and model development to deployment and monitoring. The main focus of MLOps is to facilitate collaboration between data scientists, data engineers, and IT operations and to automate the deployment, monitoring, and management of ML models. Some key factors led to the rise of MLOps.
+MLOps, on the other hand, stands for Machine Learning Operations, and it extends the principles of DevOps to the ML lifecycle. MLOps automates and streamlines the end-to-end ML lifecycle, from data preparation and model development to deployment and monitoring. The main focus of MLOps is to facilitate collaboration between data scientists, data engineers, and IT operations and to automate the deployment, monitoring, and management of ML models. Some key factors led to the rise of MLOps.
Edge Impulse is an end-to-end development platform for creating and deploying machine learning models onto edge devices such as microcontrollers and small processors. It aims to make embedded machine learning more accessible to software developers through its easy-to-use web interface and integrated tools for data collection, model development, optimization, and deployment. Its key capabilities include the following:
+Edge Impulse is an end-to-end development platform for creating and deploying machine learning models onto edge devices such as microcontrollers and small processors. It makes embedded machine learning more accessible to software developers through its easy-to-use web interface and integrated tools for data collection, model development, optimization, and deployment. Its key capabilities include the following:
Beyond the accessibility of the platform itself, the Edge Impulse team has expanded the knowledge base of the embedded ML ecosystem. The platform lends itself to academic environments, having been used in online courses and on-site workshops globally. Numerous case studies featuring industry and research use cases have been published, most notably Oura Ring, which uses ML to identify sleep patterns. The team has made repositories open source on GitHub, facilitating community growth. Users can also make projects public to share techniques and download libraries shared under the Apache license. Organization-level access enables collaboration on workflows.
-Overall, Edge Impulse is uniquely comprehensive and integrateable for developer workflows. Larger platforms like Google and Microsoft focus more on cloud versus embedded systems. TinyMLOps frameworks such as Neuton AI and Latent AI offer some functionality but lack Edge Impulse’s end-to-end capabilities. TensorFlow Lite Micro is the standard inference engine due to flexibility, open source status, and TensorFlow integration, but it uses more memory and storage than Edge Impulse’s EON Compiler. Other platforms need to be updated, academic-focused, or more versatile. In summary, Edge Impulse aims to streamline and scale embedded ML through an accessible, automated platform.
+Overall, Edge Impulse is uniquely comprehensive and easy to integrate into developer workflows. Larger platforms like Google and Microsoft focus more on cloud than embedded systems. TinyMLOps frameworks such as Neuton AI and Latent AI offer some functionality but lack Edge Impulse’s end-to-end capabilities. TensorFlow Lite Micro is the standard inference engine due to its flexibility, open source status, and TensorFlow integration, but it uses more memory and storage than Edge Impulse’s EON Compiler. Other platforms tend to be less current, more narrowly academic, or less versatile. In summary, Edge Impulse streamlines and scales embedded ML through an accessible, automated platform.
Traditional MLOps frameworks are insufficient for integrating continuous therapeutic monitoring (CTM) and AI in clinical settings for a few key reasons:
MLOps focuses on the ML model lifecycle—training, deployment, monitoring. But healthcare involves coordinating multiple human stakeholders—patients and clinicians—not just models.
MLOps aims to automate IT system monitoring and management. However, optimizing patient health requires personalized care and human oversight, not just automation.
MLOps automates IT system monitoring and management. However, optimizing patient health requires personalized care and human oversight, not just automation.
CTM and healthcare delivery are complex sociotechnical systems with many moving parts. MLOps doesn’t provide a framework for coordinating human and AI decision-making.
Ethical considerations regarding healthcare AI require human judgment, oversight, and accountability. MLOps frameworks lack processes for ethical oversight.
Patient health data is highly sensitive and regulated. MLOps alone doesn’t ensure that protected health information is handled in accordance with privacy and regulatory standards.
Clinical validation of AI-guided treatment plans is essential for provider adoption. MLOps doesn’t incorporate domain-specific evaluation of model recommendations.
Optimizing healthcare metrics like patient outcomes requires aligning stakeholder incentives and workflows, which pure tech-focused MLOps overlooks.
Thus, effectively integrating AI/ML and CTM in clinical practice requires more than just model and data pipelines; it requires coordinating complex human-AI collaborative decision-making, which ClinAIOps aims to address via its multi-stakeholder feedback loops.
+Thus, effectively integrating AI/ML and CTM in clinical practice requires more than just model and data pipelines; it requires coordinating complex human-AI collaborative decision-making, which ClinAIOps addresses via its multi-stakeholder feedback loops.
The ClinAIOps framework, shown in Figure 13.8, provides these mechanisms through three feedback loops. The loops are useful for coordinating the insights from continuous physiological monitoring, clinician expertise, and AI guidance via feedback loops, enabling data-driven precision medicine while maintaining human accountability. ClinAIOps provides a model for effective human-AI symbiosis in healthcare: the patient is at the center, providing health challenges and goals that inform the therapy regimen; the clinician oversees this regimen, giving inputs for adjustments based on continuous monitoring data and health reports from the patient; whereas AI developers play a crucial role by creating systems that generate alerts for therapy updates, which the clinician then vets.
@@ -1762,7 +1762,7 @@
The hypertension case clearly shows the need to look beyond training and deploying a performant ML model to consider the entire human-AI sociotechnical system. This is the key gap ClinAIOps aims to address over traditional MLOps. Traditional MLOps is overly tech-focused on automating ML model development and deployment, while ClinAIOps incorporates clinical context and human-AI coordination through multi-stakeholder feedback loops.
+The hypertension case clearly shows the need to look beyond training and deploying a performant ML model to consider the entire human-AI sociotechnical system. This is the key gap ClinAIOps addresses over traditional MLOps. Traditional MLOps is overly tech-focused on automating ML model development and deployment, while ClinAIOps incorporates clinical context and human-AI coordination through multi-stakeholder feedback loops.
Table 13.3 compares them. This table highlights how, when MLOps is implemented, we need to consider more than just ML models.
Model pruning is a technique in machine learning that aims to reduce the size and complexity of a neural network model while maintaining its predictive capabilities as much as possible. The goal of model pruning is to remove redundant or non-essential components of the model, including connections between neurons, individual neurons, or even entire layers of the network.
+Model pruning is a technique in machine learning that reduces the size and complexity of a neural network model while maintaining its predictive capabilities as much as possible. The goal of model pruning is to remove redundant or non-essential components of the model, including connections between neurons, individual neurons, or even entire layers of the network.
This process typically involves analyzing the machine learning model to identify and remove weights, nodes, or layers that have little impact on the model’s outputs. By selectively pruning a model in this way, the total number of parameters can be reduced significantly without substantial declines in model accuracy. The resulting compressed model requires less memory and computational resources to train and run while enabling faster inference times.
Model pruning is especially useful when deploying machine learning models to devices with limited compute resources, such as mobile phones or TinyML systems. The technique facilitates the deployment of larger, more complex models on these devices by reducing their resource demands. Additionally, smaller models require less data to generalize well and are less prone to overfitting. By providing an efficient way to simplify models, model pruning has become a vital technique for optimizing neural networks in machine learning.
There are several common pruning techniques used in machine learning; these include structured pruning, unstructured pruning, iterative pruning, Bayesian pruning, and even random pruning. In addition to pruning the weights, one can also prune the activations. Activation pruning specifically targets neurons or filters that activate rarely or have overall low activation. There are numerous other methods, such as sensitivity and movement pruning. For a comprehensive list of methods, the reader is encouraged to read the following paper: “A Survey on Deep Neural Network Pruning: Taxonomy, Comparison, Analysis, and Recommendations” (2023).
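To make the unstructured case concrete, the following is a minimal sketch of magnitude-based pruning using NumPy; the `magnitude_prune` function and the toy weight matrix are illustrative assumptions, not code from any particular framework. The idea is simply to zero out the smallest-magnitude fraction of weights while keeping the tensor shape intact:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Unstructured magnitude pruning: zero out the smallest-magnitude
    fraction of weights, keeping the array shape intact."""
    flat = np.abs(weights).flatten()
    k = int(len(flat) * sparsity)        # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.sort(flat)[k - 1]     # k-th smallest magnitude
    mask = np.abs(weights) > threshold   # keep only larger weights
    return weights * mask

# A toy 4x4 weight matrix pruned to 50% sparsity
w = np.array([[0.10, -0.80, 0.05, 0.90],
              [0.30, -0.02, 0.70, -0.40],
              [0.01, 0.60, -0.09, 0.20],
              [-0.50, 0.04, 0.85, -0.06]])
pruned = magnitude_prune(w, 0.5)
print(np.count_nonzero(pruned))  # 8 of 16 weights survive
```

In practice, frameworks apply such masks iteratively and fine-tune between pruning rounds to recover accuracy.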
@@ -1325,7 +1325,7 @@
Numeric encoding, the art of transmuting numbers into a computer-amenable format, and their subsequent storage are critical for computational efficiency. For instance, floating-point numbers might be encoded using the IEEE 754 standard, which apportions bits among sign, exponent, and fraction components, thereby enabling the representation of a vast array of values with a single format. There are a few new IEEE floating point formats that have been defined specifically for AI workloads:
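The sign/exponent/fraction apportioning of standard IEEE 754 single precision can be inspected directly with Python's `struct` module; the helper below is a small illustrative sketch (the AI-oriented formats mentioned here repartition these same fields, e.g. trading fraction bits for exponent range):

```python
import struct

def float32_bits(x):
    """Return the sign, biased exponent, and fraction fields
    of a number's IEEE 754 single-precision (float32) encoding."""
    (b,) = struct.unpack(">I", struct.pack(">f", x))  # reinterpret bits
    sign = b >> 31              # 1 bit
    exponent = (b >> 23) & 0xFF # 8 bits, biased by 127
    fraction = b & 0x7FFFFF     # 23 bits
    return sign, exponent, fraction

# 1.0 encodes as sign=0, biased exponent=127, fraction=0
print(float32_bits(1.0))   # (0, 127, 0)
print(float32_bits(-2.0))  # (1, 128, 0)
```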
Zero-shot quantization refers to the process of converting a full-precision deep learning model directly into a low-precision, quantized model without the need for any retraining or fine-tuning on the quantized model. The primary advantage of this approach is its efficiency, as it eliminates the often time-consuming and resource-intensive process of retraining a model post-quantization. By leveraging techniques that anticipate and minimize quantization errors, zero-shot quantization aims to maintain the model’s original accuracy even after reducing its numerical precision. It is particularly useful for Machine Learning as a Service (MLaaS) providers aiming to expedite the deployment of their customer’s workloads without having to access their datasets.
+Zero-shot quantization refers to the process of converting a full-precision deep learning model directly into a low-precision, quantized model without the need for any retraining or fine-tuning on the quantized model. The primary advantage of this approach is its efficiency, as it eliminates the often time-consuming and resource-intensive process of retraining a model post-quantization. By leveraging techniques that anticipate and minimize quantization errors, zero-shot quantization maintains the model’s original accuracy even after reducing its numerical precision. It is particularly useful for Machine Learning as a Service (MLaaS) providers aiming to expedite the deployment of their customer’s workloads without having to access their datasets.
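The core step any such post-training approach relies on can be sketched as simple affine quantization, which maps floats to low-precision integers using only the tensor's own range, with no retraining involved. This is a minimal illustration, not a full zero-shot pipeline; the function names and the 8-bit asymmetric scheme are assumptions for the sketch:

```python
import numpy as np

def quantize(w, num_bits=8):
    """Affine (asymmetric) quantization of a float tensor to uint8,
    using only the tensor's own min/max -- no retraining involved."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (w.max() - w.min()) / (qmax - qmin)
    zero_point = round(qmin - w.min() / scale)   # integer mapped to 0.0
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax)
    return q.astype(np.uint8), scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximate float tensor from the integer encoding."""
    return (q.astype(np.float32) - zero_point) * scale

w = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale, zp = quantize(w)
w_hat = dequantize(q, scale, zp)
print(np.max(np.abs(w - w_hat)))  # small quantization error
```

Zero-shot methods add machinery on top of this step, e.g. synthesizing calibration data from batch-norm statistics, to estimate ranges without touching the customer's dataset.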
Level of Effectiveness: Attackers aim to replicate the model’s decision-making capabilities rather than focus on the precise parameter values. This is done through understanding the overall behavior of the model. Consider a scenario where an attacker wants to copy the behavior of an image classification model. By analyzing the model’s decision boundaries, the attacker tunes their own model to reach an effectiveness comparable to the original. This could entail analyzing 1) the confusion matrix to understand the balance of prediction metrics (true positive, true negative, false positive, false negative) and 2) other performance metrics, such as F1 score and precision, to ensure that the two models are comparable.
Prediction Consistency: The attacker tries to align their model’s prediction patterns with the target model’s. This involves matching prediction outputs (both positive and negative) on the same set of inputs and ensuring distributional consistency across different classes. For instance, consider a natural language processing (NLP) model that generates sentiment analysis for move reviews (labels reviews as positive, neutral, or negative). The attacker will try to fine-tune their model to match the prediction of the original models on the same set of movie reviews. This includes ensuring that the model makes the same mistakes (mispredictions) that the targeted model makes.
Prediction Consistency: The attacker tries to align their model’s prediction patterns with the target model’s. This involves matching prediction outputs (both positive and negative) on the same set of inputs and ensuring distributional consistency across different classes. For instance, consider a natural language processing (NLP) model that generates sentiment analysis for movie reviews (labels reviews as positive, neutral, or negative). The attacker will try to fine-tune their model to match the prediction of the original models on the same set of movie reviews. This includes ensuring that the model makes the same mistakes (mispredictions) that the targeted model makes.
For ML systems, consequences include impaired model accuracy, denial of service, extraction of private training data or model parameters, and reverse engineering of model architectures. Attackers could use fault injection to force misclassifications, disrupt autonomous systems, or steal intellectual property.
-For example, in (Breier et al. 2018), the authors successfully injected a fault attack into a deep neural network deployed on a microcontroller. They used a laser to heat specific transistors, forcing them to switch states. In one instance, they used this method to attack a ReLU activation function, resulting in the function always outputting a value of 0, regardless of the input. In the assembly code in Figure 14.2, the attack caused the executing program always to skip the jmp
end instruction on line 6. This means that HiddenLayerOutput[i]
is always set to 0, overwriting any values written to it on lines 4 and 5. As a result, the targeted neurons are rendered inactive, resulting in misclassifications.
For example, in (Breier et al. 2018), the authors successfully injected a fault attack into a deep neural network deployed on a microcontroller. They used a laser to heat specific transistors, forcing them to switch states. In one instance, they used this method to attack a ReLU activation function, resulting in the function always outputting a value of 0, regardless of the input. In the assembly code in Figure 14.2, the attack caused the executing program always to skip the jmp end
instruction on line 6. This means that HiddenLayerOutput[i]
is always set to 0, overwriting any values written to it on lines 4 and 5. As a result, the targeted neurons are rendered inactive, resulting in misclassifications.
Baby Monitors: Many WiFi-enabled baby monitors have been found to have unsecured interfaces for remote access. This allowed attackers to gain live audio and video feeds from people’s homes, representing a major privacy violation.
Pacemakers: Interface vulnerabilities were discovered in some pacemakers that could allow attackers to manipulate cardiac functions if exploited. This presents a potentially life-threatening scenario.
Smart Lightbulbs: A researcher found he could access unencrypted data from smart lightbulbs via a debug interface, including WiFi credentials, allowing him to gain access to the connected network (Greengard 2015).
Smart Cars: If left unsecured, The OBD-II diagnostic port has been shown to provide an attack vector into automotive systems. Researchers could use it to control brakes and other components (Miller and Valasek 2015).
Smart Cars: If left unsecured, The OBD-II diagnostic port has been shown to provide an attack vector into automotive systems. Attackers could use it to control brakes and other components (Miller and Valasek 2015).
Privacy and security concerns have also risen with the public use of generative AI models, including OpenAI’s GPT4 and other LLMs. ChatGPT, in particular, has been discussed more recently about Privacy, given all the personal information collected from ChatGPT users. In June, a class action lawsuit was filed against ChatGPT due to concerns that it was trained on proprietary medical and personal information without proper permissions or consent. As a result of these privacy concerns, many companies have prohibited their employees from accessing ChatGPT, and uploading private, company related information to the chatbot. Further, ChatGPT is susceptible to prompt injection and other security attacks that could compromise the privacy of the proprietary data upon which it was trained.
+Privacy and security concerns have also risen with the public use of generative AI models, including OpenAI’s GPT4 and other LLMs. ChatGPT, in particular, has drawn recent privacy scrutiny, given all the personal information it collects from its users. In June 2023, a class action lawsuit was filed against ChatGPT due to concerns that it was trained on proprietary medical and personal information without proper permissions or consent. As a result of these privacy concerns, many companies have prohibited their employees from accessing ChatGPT and from uploading private, company-related information to the chatbot. Further, ChatGPT is susceptible to prompt injection and other security attacks that could compromise the privacy of the proprietary data upon which it was trained.
While ChatGPT has instituted protections to prevent people from accessing private and ethically questionable information, several individuals have successfully bypassed these protections through prompt injection and other security attacks. As demonstrated in Figure 14.9, users can bypass ChatGPT protections to mimic the tone of a “deceased grandmother” to learn how to bypass a web application firewall (Gupta et al. 2023).
@@ -1741,7 +1741,7 @@
Federated Learning (FL) is a type of machine learning in which a model is built and distributed across multiple devices or servers while keeping the training data localized. It was previously discussed in the Model Optimizations chapter; we recap it briefly here for completeness, focusing on the aspects that pertain to this chapter.
-FL aims to train machine learning models across decentralized networks of devices or systems while keeping all training data localized. Figure 14.12 illustrates this process: each participating device leverages its local data to calculate model updates, which are then aggregated to build an improved global model. However, the raw training data is never directly shared, transferred, or compiled. This privacy-preserving approach allows for the joint development of ML models without centralizing the potentially sensitive training data in one place.
+FL trains machine learning models across decentralized networks of devices or systems while keeping all training data localized. Figure 14.12 illustrates this process: each participating device leverages its local data to calculate model updates, which are then aggregated to build an improved global model. However, the raw training data is never directly shared, transferred, or compiled. This privacy-preserving approach allows for the joint development of ML models without centralizing the potentially sensitive training data in one place.
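The aggregation step at the heart of this process can be sketched as federated averaging (FedAvg): the server combines client updates weighted by local dataset size, and only model parameters, never raw data, leave each device. The function and the toy two-parameter "models" below are illustrative assumptions:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: combine client model parameters into a
    global model, weighting each client by its local dataset size.
    Only parameter vectors travel; raw data stays on each device."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three clients trained locally on data that never leaves the device
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 100, 200]
global_model = federated_average(clients, sizes)
print(global_model)  # [3.5 4.5]
```

Real deployments repeat this round many times and often add secure aggregation so the server cannot inspect any individual client's update.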
Some researchers demonstrate a real-life example of machine unlearning approaches applied to SOTA machine learning models through training an LLM, LLaMA2-7b, to unlearn any references to Harry Potter (Eldan and Russinovich 2023). Though this model took 184K GPU hours to pre-train, it only took 1 GPU hour of fine-tuning to erase the model’s ability to generate or recall Harry Potter-related content without noticeably compromising the accuracy of generating content unrelated to Harry Potter. Figure 14.13 demonstrates how the model output changes before (Llama-7b-chat-hf column) and after (Finetuned Llama-b column) unlearning has occurred.
+Some researchers have demonstrated a real-life example of machine unlearning approaches applied to SOTA machine learning models through training an LLM, LLaMA2-7b, to unlearn any references to Harry Potter (Eldan and Russinovich 2023). Though this model took 184K GPU hours to pre-train, it only took 1 GPU hour of fine-tuning to erase the model’s ability to generate or recall Harry Potter-related content without noticeably compromising the accuracy of generating content unrelated to Harry Potter. Figure 14.13 demonstrates how the model output changes before (Llama-7b-chat-hf column) and after (Finetuned Llama-7b column) unlearning has occurred.
Deep learning models have previously been shown to be vulnerable to adversarial attacks, in which the attacker generates adversarial data similar to the original training data, where a human cannot tell the difference between the real and fabricated data. The adversarial data results in the model outputting incorrect predictions, which could have detrimental consequences in various applications, including healthcare diagnosis predictions. Machine unlearning has been used to unlearn the influence of adversarial data to prevent these incorrect predictions from occurring and causing any harm
+Deep learning models have previously been shown to be vulnerable to adversarial attacks, in which the attacker generates adversarial data similar to the original training data, where a human cannot tell the difference between the real and fabricated data. The adversarial data results in the model outputting incorrect predictions, which could have detrimental consequences in various applications, including healthcare diagnosis predictions. Machine unlearning has been used to unlearn the influence of adversarial data to prevent these incorrect predictions from occurring and causing any harm.
Result Encryption: The result \(E(xy)\) remains encrypted and can only be decrypted by someone with the corresponding private key to reveal the actual product \(xy\).
Only authorized parties with the private key can decrypt the final outputs, protecting the intermediate state. However, noise accumulates with each operation, preventing further computation without decryption.
-Beyond healthcare, homomorphic encryption enables confidential computing for applications like financial fraud detection, insurance analytics, genetics research, and more. It offers an alternative to techniques like multipartymultiparty computation and TEEs. Ongoing research aims to improve the efficiency and capabilities.
+Beyond healthcare, homomorphic encryption enables confidential computing for applications like financial fraud detection, insurance analytics, genetics research, and more. It offers an alternative to techniques like multiparty computation and TEEs. Ongoing research continues to improve its efficiency and capabilities.
Tools like HElib, SEAL, and TensorFlow HE provide libraries for exploring and implementing homomorphic encryption in real-world machine learning pipelines.
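Before reaching for those libraries, the principle can be illustrated with a toy additively homomorphic scheme. The sketch below implements textbook Paillier encryption with deliberately tiny, insecure primes (an assumption for readability, not the lattice-based schemes those libraries use): multiplying two ciphertexts yields an encryption of the sum of the plaintexts, all without the private key.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic).
# The primes are far too small to be secure; this only shows the mechanics.
p, q = 311, 317
n = p * q
n2 = n * n
g = n + 1                       # standard simple choice of generator
lam = math.lcm(p - 1, q - 1)    # private key component
mu = pow(lam, -1, n)            # valid because g = n + 1

def encrypt(m):
    """Encrypt plaintext m (0 <= m < n) with fresh randomness r."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Recover the plaintext using the private key (lam, mu)."""
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

c1, c2 = encrypt(12), encrypt(30)
# Multiplying ciphertexts adds the underlying plaintexts:
print(decrypt((c1 * c2) % n2))  # 42
```

Fully homomorphic schemes extend this idea to support both addition and multiplication on ciphertexts, which is what makes encrypted ML inference possible and also what makes it expensive.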
For many real-time and embedded applications, fully homomorphic encryption remains impractical for the following reasons.
-Computational Overhead: Homomorphic encryption imposes very high computational overheads, often resulting in slowdowns of over 100x for real-world ML applications. This makes it impractical for many time-sensitive or resource-constrained uses. Optimized hardware and parallelization can help but not eliminate this issue.
+Computational Overhead: Homomorphic encryption imposes very high computational overheads, often resulting in slowdowns of over 100x for real-world ML applications. This makes it impractical for many time-sensitive or resource-constrained uses. Optimized hardware and parallelization can alleviate but not eliminate this issue.
Complexity of Implementation: The sophisticated algorithms require deep expertise in cryptography to be implemented correctly. Nuances like format compatibility with floating point ML models and scalable key management pose hurdles. This complexity hinders widespread practical adoption.
Algorithmic Limitations: Current schemes restrict the functions and depth of computations supported, limiting the models and data volumes that can be processed. Ongoing research is pushing these boundaries, but restrictions remain.
Hardware Acceleration: Homomorphic encryption requires specialized hardware, such as secure processors or coprocessors with TEEs, which adds design and infrastructure costs.
@@ -1885,7 +1885,7 @@The overarching goal of MPC is to enable different parties to jointly compute a function over their inputs while keeping those inputs private. For example, two organizations may want to collaborate on training a machine learning model by combining their respective data sets. Still, they cannot directly reveal that data due to Privacy or confidentiality constraints. MPC aims to provide protocols and techniques that allow them to achieve the benefits of pooled data for model accuracy without compromising the privacy of each organization’s sensitive data.
+The overarching goal of Multi-Party Computation (MPC) is to enable different parties to jointly compute a function over their inputs while keeping those inputs private. For example, two organizations may want to collaborate on training a machine learning model by combining their respective data sets. Still, they cannot directly reveal that data due to privacy or confidentiality constraints. MPC provides protocols and techniques that allow them to achieve the benefits of pooled data for model accuracy without compromising the privacy of each organization’s sensitive data.
At a high level, MPC works by carefully splitting the computation into parts that each party can execute independently using their private input. The results are then combined to reveal only the final output of the function and nothing about the intermediate values. Cryptographic techniques are used to guarantee that the partial results provably remain private.
Let’s take a simple example of an MPC protocol. One of the most basic MPC protocols is the secure addition of two numbers. Each party splits its input into random shares that are secretly distributed. They exchange the shares and locally compute the sum of the shares, which reconstructs the final sum without revealing the individual inputs. For example, if Alice has input x and Bob has input y:
Alice splits her input into random shares \(x_1 + x_2 = x\) and sends \(x_1\) to Bob; Bob splits his input into \(y_1 + y_2 = y\) and sends \(y_1\) to Alice. Alice computes \(x_2 + y_1 = s_1\), Bob computes \(x_1 + y_2 = s_2\)
\(s_1 + s_2 = x + y\) is the final sum, without revealing \(x\) or \(y\).
Alice’s and Bob’s individual inputs (\(x\) and \(y\)) remain private, and each party only reveals one number associated with their original inputs. The random spits ensure no information about the original numbers disclosed
+Alice’s and Bob’s individual inputs (\(x\) and \(y\)) remain private, and each party only reveals one number associated with their original inputs. The random splits ensure that no information about the original numbers is disclosed.
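The secure-addition protocol above can be simulated in a few lines of Python. This sketch works over a public prime modulus so each share on its own is uniformly random and reveals nothing about the secret; the variable names mirror the protocol description:

```python
import random

P = 2 ** 31 - 1  # public prime modulus; individual shares are uniform mod P

def share(secret):
    """Split a secret into two additive shares that sum to it mod P."""
    s1 = random.randrange(P)
    s2 = (secret - s1) % P
    return s1, s2

# Alice holds x, Bob holds y; each sends one share to the other.
x, y = 15, 27
x1, x2 = share(x)   # Alice keeps x2, sends x1 to Bob
y1, y2 = share(y)   # Bob keeps y2, sends y1 to Alice

s1 = (x2 + y1) % P  # computed locally by Alice
s2 = (x1 + y2) % P  # computed locally by Bob
print((s1 + s2) % P)  # 42 -- the sum, with x and y never revealed
```

Neither party ever sees the other's full input, yet combining the two partial sums reconstructs \(x + y\) exactly.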
Secure Comparison: Another basic operation is a secure comparison of two numbers, determining which is greater than the other. This can be done using techniques like Yao’s Garbled Circuits, where the comparison circuit is encrypted to allow joint evaluation of the inputs without leaking them.
Secure Matrix Multiplication: Matrix operations like multiplication are essential for machine learning. MPC techniques like additive secret sharing can be used to split matrices into random shares, compute products on the shares, and then reconstruct the result.
Secure Model Training: Distributed machine learning training algorithms like federated averaging can be made secure using MPC. Model updates computed on partitioned data at each node are secretly shared between nodes and aggregated to train the global model without exposing individual updates.
@@ -1918,7 +1918,7 @@
MPC systems require extensive communication and interaction between parties to compute on shares/ciphertexts jointly.
As a result, MPC protocols can slow down computations by 3-4 orders of magnitude compared to plain implementations. This becomes prohibitively expensive for large datasets and models. Therefore, training machine learning models on encrypted data using MPC remains infeasible today for realistic dataset sizes due to the overhead. Clever optimizations and approximations are needed to make MPC practical.
-Ongoing MPC research aims to close this efficiency gap through cryptographic advances, new algorithms, trusted hardware like SGX enclaves, and leveraging accelerators like GPUs/TPUs. However, in the foreseeable future, some degree of approximation and performance tradeoff is needed to scale MPC to meet the demands of real-world machine learning systems.
+Ongoing MPC research closes this efficiency gap through cryptographic advances, new algorithms, trusted hardware like SGX enclaves, and leveraging accelerators like GPUs/TPUs. However, in the foreseeable future, some degree of approximation and performance tradeoff is needed to scale MPC to meet the demands of real-world machine learning systems.
While synthetic data aims to remove any evidence of the original dataset, privacy leakage is still a risk since the synthetic data mimics the original data. The statistical information and distribution are similar, if not the same, between the original and synthetic data. By resampling from the distribution, adversaries may still be able to recover the original training samples. Due to their inherent learning processes and complexities, neural networks might accidentally reveal sensitive information about the original training data.
+While synthetic data tries to remove any evidence of the original dataset, privacy leakage is still a risk since the synthetic data mimics the original data. The statistical information and distribution are similar, if not the same, between the original and synthetic data. By resampling from the distribution, adversaries may still be able to recover the original training samples. Due to their inherent learning processes and complexities, neural networks might accidentally reveal sensitive information about the original training data.
A core challenge with synthetic data is the potential gap between synthetic and real-world data distributions. Despite advancements in generative modeling techniques, synthetic data may only partially capture real data’s complexity, diversity, and nuanced patterns. This can limit the utility of synthetic data for robustly training machine learning models. Rigorously evaluating synthetic data quality through adversary methods and comparing model performance to real data benchmarks helps assess and improve fidelity. However, inherently, synthetic data remains an approximation.
Another critical concern is the privacy risks of synthetic data. Generative models may leak identifiable information about individuals in the training data, which could enable reconstruction of private information. Emerging adversarial attacks demonstrate the challenges in preventing identity leakage from synthetic data generation pipelines. Techniques like differential privacy can help safeguard privacy but come with tradeoffs in data utility. There is an inherent tension between producing useful synthetic data and fully protecting sensitive training data, which must be balanced.
-Additional pitfalls of synthetic data include amplified biases, labeling difficulties, the computational overhead of training generative models, storage costs, and failure to account for out-of-distribution novel data. While these are secondary to the core synthetic-real gap and privacy risks, they remain important considerations when evaluating the suitability of synthetic data for particular machine-learning tasks. As with any technique, the advantages of synthetic data come with inherent tradeoffs and limitations that require thoughtful mitigation strategies.
+Additional pitfalls of synthetic data include amplified biases, mislabeling, the computational overhead of training generative models, storage costs, and failure to account for out-of-distribution novel data. While these are secondary to the core synthetic-real gap and privacy risks, they remain important considerations when evaluating the suitability of synthetic data for particular machine-learning tasks. As with any technique, the advantages of synthetic data come with inherent tradeoffs and limitations that require thoughtful mitigation strategies.
For cloud-based machine learning, explainability techniques can leverage significant compute resources, enabling complex methods like SHAP values or sampling-based approaches to interpret model behaviors. For example, Microsoft’s InterpretML toolkit provides explainability techniques tailored for cloud environments.
However, edge ML operates on resource-constrained devices, requiring more lightweight explainability methods that can run locally without excessive latency. Techniques like LIME (Ribeiro, Singh, and Guestrin 2016) approximate model explanations using linear models or decision trees to avoid expensive computations, which makes them ideal for resource-constrained devices. However, LIME requires training hundreds to even thousands of models to generate good explanations, which is often infeasible given edge computing constraints. In contrast, saliency-based methods are often much faster in practice, only requiring a single forward pass through the network to estimate feature importance. This greater efficiency makes such methods better suited to edge devices with limited compute resources where low-latency explanations are critical.
-Given tiny hardware capabilities, embedded systems pose the most significant challenges for explainability. More compact models and limited data make inherent model transparency easier. Explaining decisions may not be feasible on high-size and power-optimized microcontrollers. DARPA’s Transparent Computing program aims to develop extremely low overhead explainability, especially for TinyML devices like sensors and wearables.
+Given tiny hardware capabilities, embedded systems pose the most significant challenges for explainability. More compact models and limited data make inherent model transparency easier. Explaining decisions may not be feasible on highly size- and power-optimized microcontrollers. DARPA’s Transparent Computing program tries to develop extremely low-overhead explainability, especially for TinyML devices like sensors and wearables.
One prominent category of adversarial attacks is gradient-based attacks. These attacks leverage the gradients of the ML model’s loss function to craft adversarial examples. The Fast Gradient Sign Method (FGSM) is a well-known technique in this category. FGSM perturbs the input data by adding small noise in the gradient direction, aiming to maximize the model’s prediction error. FGSM can quickly generate adversarial examples, as shown in Figure 17.19, by taking a single step in the gradient direction.
Another variant, the Projected Gradient Descent (PGD) attack, extends FGSM by iteratively applying the gradient update step, allowing for more refined and powerful adversarial examples. The Jacobian-based Saliency Map Attack (JSMA) is another gradient-based approach that identifies the most influential input features and perturbs them to create adversarial examples.
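The gradient-based idea behind FGSM can be shown end-to-end on a model simple enough to differentiate by hand. The sketch below attacks a toy logistic-regression classifier (the model, weights, and epsilon are illustrative assumptions): the gradient of the cross-entropy loss with respect to the input is \((p - y)\,w\), and stepping \(\epsilon\) in its sign direction flips the prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method against a logistic-regression model.
    Gradient of the cross-entropy loss w.r.t. the INPUT is (p - y) * w;
    we step eps in its sign direction to maximize the loss."""
    p = sigmoid(np.dot(w, x) + b)      # model's predicted probability
    grad_x = (p - y) * w               # input-gradient of the loss
    return x + eps * np.sign(grad_x)

# A toy model and a correctly classified input
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])               # score = 1.5 -> class 1
x_adv = fgsm(x, y=1, w=w, b=b, eps=1.0)
print(np.dot(w, x) + b > 0)            # True  (original: class 1)
print(np.dot(w, x_adv) + b > 0)        # False (adversarial: flipped)
```

PGD simply repeats this step with a smaller epsilon, projecting back into an allowed perturbation ball after each iteration.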
Optimization-based Attacks
-These attacks formulate the generation of adversarial examples as an optimization problem. The Carlini and Wagner (C&W) attack is a prominent example in this category. It aims to find the smallest perturbation that can cause misclassification while maintaining the perceptual similarity to the original input. The C&W attack employs an iterative optimization process to minimize the perturbation while maximizing the model’s prediction error.
+These attacks formulate the generation of adversarial examples as an optimization problem. The Carlini and Wagner (C&W) attack is a prominent example in this category. It finds the smallest perturbation that can cause misclassification while maintaining the perceptual similarity to the original input. The C&W attack employs an iterative optimization process to minimize the perturbation while maximizing the model’s prediction error.
Another optimization-based approach is the Elastic Net Attack to DNNs (EAD), which incorporates elastic net regularization to generate adversarial examples with sparse perturbations.
Transfer-based Attacks
Transfer-based attacks exploit the transferability property of adversarial examples. Transferability refers to the phenomenon where adversarial examples crafted for one ML model can often fool other models, even if they have different architectures or were trained on different datasets. This enables attackers to generate adversarial examples using a surrogate model and then transfer them to the target model without requiring direct access to its parameters or gradients. Transfer-based attacks highlight the generalization of adversarial vulnerabilities across different models and the potential for black-box attacks.
@@ -1549,9 +1549,9 @@
Modifying training data labels: One of the most straightforward mechanisms of data poisoning is modifying the training data labels. In this approach, the attacker selectively changes the labels of a subset of the training samples to mislead the model’s learning process as shown in Figure 17.23. For example, in a binary classification task, the attacker might flip the labels of some positive samples to negative, or vice versa. By introducing such label noise, the attacker aims to degrade the model’s performance or cause it to make incorrect predictions for specific target instances.
+Modifying training data labels: One of the most straightforward mechanisms of data poisoning is modifying the training data labels. In this approach, the attacker selectively changes the labels of a subset of the training samples to mislead the model’s learning process as shown in Figure 17.23. For example, in a binary classification task, the attacker might flip the labels of some positive samples to negative, or vice versa. By introducing such label noise, the attacker degrades the model’s performance or causes it to make incorrect predictions for specific target instances.
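A label-flipping attack is mechanically trivial, which is part of why it is dangerous. The hypothetical helper below (a toy sketch, not an attack from the literature) flips a chosen fraction of binary labels in a training set:

```python
import numpy as np

def flip_labels(y, flip_fraction, rng):
    """Toy label-flipping poisoning attack: invert the binary labels
    of a randomly chosen subset of training samples."""
    y_poisoned = y.copy()
    n_flip = int(len(y) * flip_fraction)
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]   # flip 0 <-> 1
    return y_poisoned, idx

rng = np.random.default_rng(0)
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
y_bad, flipped = flip_labels(y, 0.3, rng)
print((y != y_bad).sum())  # 3 labels flipped
```

Even modest flip fractions can measurably degrade a classifier trained on the poisoned set, and targeted flips (only on samples near the decision boundary) are harder to detect than random ones.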
Altering feature values in training data: Another mechanism of data poisoning involves altering the feature values of the training samples without modifying the labels. The attacker carefully crafts the feature values to introduce specific biases or vulnerabilities into the model. For instance, in an image classification task, the attacker might add imperceptible perturbations to a subset of images, causing the model to learn a particular pattern or association. This type of poisoning can create backdoors or trojans in the trained model, which specific input patterns can trigger.
Injecting carefully crafted malicious samples: In this mechanism, the attacker creates malicious samples designed to poison the model. These samples are crafted to have a specific impact on the model’s behavior while blending in with the legitimate training data. The attacker might use techniques such as adversarial perturbations or data synthesis to generate poisoned samples that are difficult to detect. By injecting these malicious samples into the training data, the attacker aims to manipulate the model’s decision boundaries or introduce targeted misclassifications.
Exploiting data collection and preprocessing vulnerabilities: Data poisoning attacks can also exploit vulnerabilities in the data collection and preprocessing pipeline. If the data collection process is not secure or there are weaknesses in the data preprocessing steps, an attacker can manipulate the data before it reaches the training phase. For example, if data is collected from untrusted sources or there are flaws in data cleaning or aggregation, an attacker can introduce poisoned samples or manipulate the data to their advantage.
Manipulating data at the source (e.g., sensor data): In some cases, attackers can manipulate the data at its source, such as sensor data or input devices. By tampering with the sensors or manipulating the environment in which data is collected, attackers can introduce poisoned samples or bias the data distribution. For instance, in a self-driving car scenario, an attacker might manipulate the sensors or the environment to feed misleading information into the training data, compromising the model’s ability to make safe and reliable decisions.
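The label-flipping mechanism described above can be illustrated with a toy experiment: flipping the labels of a few boundary samples shifts a nearest-centroid classifier's learned centroids enough to misclassify clean inputs. The dataset and the choice of which samples to flip are illustrative assumptions:

```python
# Sketch: label-flipping data poisoning against a nearest-centroid
# classifier on toy 1-D data. The dataset and flip choice are
# illustrative assumptions, not drawn from any real system.

def fit_centroids(points, labels):
    """Return per-class means (centroids) for classes 0 and 1."""
    centroids = {}
    for cls in (0, 1):
        vals = [x for x, y in zip(points, labels) if y == cls]
        centroids[cls] = sum(vals) / len(vals)
    return centroids

def predict(centroids, x):
    """Assign x to the class with the nearer centroid."""
    return 0 if abs(x - centroids[0]) < abs(x - centroids[1]) else 1

def accuracy(centroids, points, labels):
    hits = sum(predict(centroids, x) == y for x, y in zip(points, labels))
    return hits / len(points)

points      = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
true_labels = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

clean = fit_centroids(points, true_labels)

# Attacker flips the labels of two boundary samples (x=6, x=7) to class 0,
# dragging the class-0 centroid toward the class boundary.
poisoned_labels = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1]
poisoned = fit_centroids(points, poisoned_labels)

print(accuracy(clean, points, true_labels))     # 1.0
print(accuracy(poisoned, points, true_labels))  # 0.9 -- x=6 now misclassified
```

Even this small flip rate moves the decision boundary; real attacks exploit the same effect at scale against far more capable models.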
While Google has made measurable progress in restraining the carbon footprint of its AI operations, the company recognizes that further efficiency gains will be vital for responsible innovation as the technology continues to expand.
One area of focus is showing how advances often viewed, incorrectly, as unsustainable increases in computing, such as neural architecture search (NAS) to find optimized models, spur downstream savings that outweigh their upfront costs. Although NAS expends more energy on model discovery than hand-engineering does, it cuts lifetime emissions by producing efficient designs reusable across countless applications.
Additionally, the analysis reveals that focusing sustainability efforts on data center and server-side optimization makes sense, given their dominant energy draw relative to consumer devices. Though Google aims to shrink inference impacts across processors like mobile phones, priority rests on improving training cycles and data center renewables procurement for maximal effect.
To that end, Google’s progress in pooling computing in efficiently designed cloud facilities highlights the value of scale and centralization. As more workloads shift away from inefficient on-premise servers, internet giants’ prioritization of renewable energy, with Google and Facebook matched 100% by renewables since 2017 and 2020, respectively, unlocks compounding emissions cuts.
Together, these efforts emphasize that while there is no room for complacency, Google’s multipronged approach shows that AI efficiency improvements are only accelerating. Cross-domain initiatives around lifecycle assessment, carbon-conscious development patterns, transparency, and matching rising AI demand with clean electricity supply pave a path toward bending the curve further as adoption grows. The company’s results compel the broader field to replicate these integrated sustainability pursuits.
Despite these promising directions, several challenges need to be addressed. One of the major challenges is the need for consistent standards and methodologies for measuring and reporting the environmental impact of AI. These methods must capture the complexity of the life cycles of AI models and system hardware. Next, efficient and environmentally sustainable AI infrastructure and system hardware are needed. This consists of three components: maximizing the utilization of accelerator and system resources, prolonging the lifetime of AI infrastructure, and designing system hardware with environmental impact in mind.
On the software side, we should trade off experimentation against the subsequent training cost. Techniques such as neural architecture search and hyperparameter optimization can be used for design space exploration, but they are often very resource-intensive. Efficient experimentation can significantly reduce the environmental footprint overhead. Next, methods to reduce wasted training efforts should be explored.
To improve model quality, we often scale the dataset. However, the increased system resources required for data storage and ingestion caused by this scaling have a significant environmental impact (Wu et al. 2022). A thorough understanding of the rate at which data loses its predictive value and devising data sampling strategies is important.
Training is critical for developing accurate and useful AI systems using machine learning. The training aims to create a machine learning model that can generalize to new, unseen data rather than memorizing the training examples. This is done by feeding training data into algorithms that learn patterns from these examples by adjusting internal parameters.
The algorithms minimize a loss function, which compares their predictions on the training data to the known labels or solutions, guiding the learning. Effective training often requires high-quality, representative data sets large enough to capture variability in real-world use cases.
It also requires choosing an algorithm suited to the task, whether a neural network for computer vision, a reinforcement learning algorithm for robotic control, or a tree-based method for categorical prediction. Careful tuning is needed for the model structure, such as neural network depth and width, and learning parameters like step size and regularization strength.
Techniques to prevent overfitting, like regularization penalties and validation with held-out data, are also important. Overfitting can occur when a model fits the training data too closely, failing to generalize to new data. This can happen if the model is too complex or trained too long.
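The interplay of loss minimization and a regularization penalty can be made concrete with a tiny gradient-descent loop. The sketch below fits a one-parameter linear model by minimizing mean squared error plus an L2 (ridge) penalty; the data and hyperparameters are illustrative assumptions chosen so the shrinkage effect is easy to see:

```python
# Sketch: gradient descent minimizing a mean-squared-error loss with an
# L2 (ridge) regularization penalty. Data and hyperparameters are
# illustrative assumptions.

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]   # generated from y = 2x, so the unpenalized optimum is w = 2
lam = 0.5              # regularization strength
lr = 0.05              # learning rate (step size)

w = 0.0
for _ in range(1000):
    n = len(xs)
    # Gradient of (1/n) * sum((w*x - y)^2) + lam * w^2 with respect to w.
    grad = (2 / n) * sum(x * (w * x - y) for x, y in zip(xs, ys)) + 2 * lam * w
    w -= lr * grad

print(round(w, 3))  # ~1.806: the penalty shrinks w below the unregularized 2.0
```

The closed-form ridge optimum here is w = Σxy / (Σx² + nλ) = 28 / 15.5 ≈ 1.806, so the loop converges to a deliberately "underfit" weight; that bias toward smaller parameters is what combats overfitting on noisier data.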
Several commercial auto-tuning platforms are available to address this problem. One solution is Google’s Vertex AI Cloud, which has extensive integrated support for state-of-the-art tuning techniques.
One of the most salient capabilities of Google’s Vertex AI managed machine learning platform is efficient, integrated hyperparameter tuning for model development. Successfully training performant ML models requires identifying optimal configurations for a set of external hyperparameters that dictate model behavior, posing a challenging high-dimensional search problem. Vertex AI simplifies this through Automated Machine Learning (AutoML) tooling.
Specifically, data scientists can leverage Vertex AI’s hyperparameter tuning engines by providing a labeled dataset and choosing a model type such as a Neural Network or Random Forest classifier. Vertex launches a Hyperparameter Search job transparently on the backend, fully handling resource provisioning, model training, metric tracking, and result analysis automatically using advanced optimization algorithms.
Under the hood, Vertex AutoML employs various search strategies to intelligently explore the most promising hyperparameter configurations based on previous evaluation results. Among these, Bayesian Optimization is offered as it provides superior sample efficiency, requiring fewer training iterations to achieve optimized model quality compared to standard Grid Search or Random Search methods. For more complex neural architecture search spaces, Vertex AutoML utilizes Population-Based Training, which simultaneously trains multiple models and dynamically adjusts their hyperparameters by leveraging the performance of other models in the population, analogous to natural selection principles.
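To make the underlying search loop concrete, here is a minimal random-search sketch over a log-uniform learning-rate range. This is not Vertex AI's implementation; it only illustrates the generic evaluate-and-keep-best cycle that such managed services automate with smarter strategies like Bayesian Optimization. The objective function is a stand-in assumption for a real training-and-validation run:

```python
import math
import random

def objective(lr):
    """Stand-in for validation loss from a training run; minimized at lr = 0.1."""
    return (math.log10(lr) + 1.0) ** 2

random.seed(0)  # reproducible search trajectory
best_lr, best_loss = None, float("inf")
for _ in range(50):
    # Sample the learning rate log-uniformly between 1e-4 and 1e0,
    # since learning rates vary over orders of magnitude.
    lr = 10 ** random.uniform(-4, 0)
    loss = objective(lr)
    if loss < best_loss:
        best_lr, best_loss = lr, loss

print(best_lr, best_loss)
```

Bayesian Optimization improves on this loop by fitting a surrogate model to past (lr, loss) pairs and sampling where the surrogate predicts the most promise, which is why it typically needs far fewer trials than random search.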
Vertex AI democratizes state-of-the-art hyperparameter search techniques at cloud scale for all ML developers, abstracting away the underlying orchestration and execution complexity. Users focus solely on their dataset, model requirements, and accuracy goals, while Vertex manages the tuning cycle, resource allocation, model training, accuracy tracking, and artifact storage under the hood. The result is getting deployment-ready, optimized ML models faster for the target problem.
efinitions','AudioTrackSelection','layer_tile_get_x','Perpendicular','monument','fromSuperlative','actionscript','#Value+','Coq','EmbedCode','$1ives','keyboard_check_pressed','mkdir','NotebookDynamicExpression','will-name','elim','Rescale',':(?![\x5cs:])','isQuestion','MinimumTimeIncrement','GalaxyData','LogMultinormalDistribution','useAudioTimeForMoves','endconfig','AbstractSet','(#Preposition|#Pronoun|way)','Makefile','clearWeaponCargoGlobal','caisse','extension','ctrlIDC','PrintForm','\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20Lorem\x20ipsum\x20dolor\x20sit\x20amet,\x20consectetur\x20adipiscing\x20elit.\x20Sed\x20do\x20eiusmod\x20tempor\x20incididunt\x20ut\x20labore\x20et\x20dolore\x20magna\x20aliqua.\x20Ut\x20enim\x20ad\x20minim\x20veniam,\x20quis\x20nostrud\x20exercitation\x20ullamco\x20laboris\x20nisi\x20ut\x20aliquip\x20ex\x20ea\x20commodo\x20consequat.\x20Duis\x20aute\x20irure\x20dolor\x20in\x20reprehenderit\x20in\x20voluptate\x20velit\x20esse\x20cillum\x20dolore\x20eu\x20fugiat\x20nulla\x20pariatur.\x20Excepteur\x20sint\x20occaecat\x20cupidatat\x20non\x20proident,\x20sunt\x20in\x20culpa\x20qui\x20officia\x20deserunt\x20mollit\x20anim\x20id\x20est\x20laborum.\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x0a\x20\x20\x20\x20
\x0a\x0a','querySelectorAll','lineMax','toString','(they|their)','ugc_visibility_private','isBetween','mdivide_left_spd','$CloudAccountName','validate','selectedEditorObjects','border-top','𝕧','emitNumericEntity','diag_deltaTime','ÙÚÛÜùúûüŨũŪūŬŭŮůŰűŲųƯưƱƲǓǔǕǖǗǘǙǚǛǜȔȕȖȗɄΰυϋύ','LocatorPane','
\x0a','(\x5cbReturn\x5cb)','\x5cn_{4,}$','MoleculeValue','treeIndex','SocketReadMessage','(?:-|','brewery','[A-Za-z_\x5cu00A1-\x5cuFFFF][A-Za-z_0-9\x5cu00A1-\x5cuFFFF]*','InterpolatingFunction','infoPanels','[(all|both)]\x20#Determiner\x20#Noun','TooltipDelay','tokenize','better','intro','CanonicalizeRegion','uplevel','Int64Fmt','pos','Retrieving\x20relevant\x20sentences','HistogramPointDensity','FindIntegerNullVector','tile_mirror','the\x20#Cardinal\x20[%Adj|Noun%]','findstr',')\x5c.?|(','etat','rtrim','DefaultControlPlacement','SetOutPath','delete',').\x20','[À-ʸa-zA-Z_$][À-ʸa-zA-Z_$0-9]*','that\x27s','billion','missileTarget','addWords','GraphicsRow','MONITOR','c_purple','physics_fixture_set_linear_damping','FilledCurveBox','ninetieth','array_create','FindEdgeCover','rindex','Serial','rotateX','DynamicModuleParent','TetrahedronBoxOptions','NumberQ','assignedVehicleRole','computation-expression','cholesky_decompose','$CloudCreditsAvailable','highscore_add','FailureDistribution','StandardDeviation','addPlayerScores','([^\x5c\x5c:=\x20\x5ct\x5cf\x5cn]|\x5c\x5c.)+','WiFiServer','set3DENIconsVisible','then','ConnectionSettings','MathieuGroupM24','setGroupIcon','Sinr','really-like','EthernetUDP','GAUSS','MathieuCharacteristicB','createShape','\x5cbend\x5csif\x5cb','IconData','9km','children','#NumericValue\x20#NumericValue','AudioEncoding','minus-value','modelToWorldWorld','c_gray','EdgeDashing','lds','~~~+[\x20]*$','GeometricMeanFilter','LeftTriangle','color_get_saturation','Pop','NormalizationLayer','CAA','file_text_write_real','PEDllCharacteristics','classPrefix','_automatic_','ASIN','endsWithParent','memcpy','doFire','c_funptr','createOutput','grouping','AsymptoticProduct','dot_product_3d_normalised','[a-zA-Z_][\x5cda-zA-Z_]+\x5c.[\x5cda-zA-Z_]{1,3}','`[A-Z][\x5cw\x27]*','RecurrenceTable','#Value\x20[(foot|feet)]','DGET','eng','had','^#PresentTense$','LEFT$','endspecify','ReentrantLock','getConnectedUAVUnit','border-image-source','getArtilleryETA','NotTildeTilde','
HalfPlane','setcomp','DeBruijnGraph','%EF%BF%BD','#Comparative','CurlyQuote','⥥','event_user','text_special','PrimeOmega','ctrlMapMouseOver','CellEditDuplicate','setTriggerArea','PaneBoxOptions','part_type_color_mix','StepRange','onBriefingNotes','package','latter','enableGunLights','Direction','MultiaxisArrangement','StieltjesGamma','tilemap_get_at_pixel','BoundaryDiscretizeGraphics','marketing','CSNG','dtanh','unchecked','ј','IncludeQuantities','SpheroidalS1','tilemap_y','None','UInt32','visit','NumericalSort','⊃⃒','Û','tryUnbox','would-be','#Noun+\x20(coach|chef|king|engineer|fellow|personality|boy|girl|man|woman|master)','$1$1$1','filename_drive','mp_grid_path','NNP','rnd','DownArrowUpArrow','cathedral','EditButtonSettings','request','imaginary','HEAD','HighlightGraph','gymnasium','BracketingBar','anti','pathname','push_local_notification','clearItemPool','1:aed,fed,xed,hed¦2:sged,xted,wled,rped,lked,kied,lmed,lped,uped,bted,rbed,rked,wned,rled,mped,fted,mned,mbed,zzed,omed,ened,cked,gned,lted,sked,ued,zed,nted,ered,rted,rmed,ced,sted,rned,ssed,rded,pted,ved,cted¦3:cled,eined,siped,ooned,uked,ymed,jored,ouded,ioted,oaned,lged,asped,iged,mured,oided,eiled,yped,taled,moned,yled,lit,kled,oaked,gled,naled,fled,uined,oared,valled,koned,soned,aided,obed,ibed,meted,nicked,rored,micked,keted,vred,ooped,oaded,rited,aired,auled,filled,ouled,ooded,ceted,tolled,oited,bited,aped,tled,vored,dled,eamed,nsed,rsed,sited,owded,pled,sored,rged,osed,pelled,oured,psed,oated,loned,aimed,illed,eured,tred,ioned,celled,bled,wsed,ooked,oiled,itzed,iked,iased,onged,ased,ailed,uned,umed,ained,auded,nulled,ysed,eged,ised,aged,oined,ated,used,dged,doned¦4:ntied,efited,uaked,caded,fired,roped,halled,roked,himed,culed,tared,lared,tuted,uared,routed,pited,naked,miled,houted,helled,hared,cored,caled,tired,peated,futed,ciled,called,tined,moted,filed,sided,poned,iloted,honed,lleted,huted,ruled,cured,named,preted,vaded,sured,talled,haled,peded,gined,nited,uided,ramed,feited,laked,gured,ctored,unged
,pired,cuted,voked,eloped,ralled,rined,coded,icited,vided,uaded,voted,mined,sired,noted,lined,nselled,luted,jured,fided,puted,piled,pared,olored,cided,hoked,enged,tured,geoned,cotted,lamed,uiled,waited,udited,anged,luded,mired,uired,raded¦5:modelled,izzled,eleted,umpeted,ailored,rseded,treated,eduled,ecited,rammed,eceded,atrolled,nitored,basted,twined,itialled,ncited,gnored,ploded,xcited,nrolled,namelled,plored,efeated,redited,ntrolled,nfined,pleted,llided,lcined,eathed,ibuted,lloted,dhered,cceded¦3ad:sled¦2aw:drew¦2ot:hot¦2ke:made¦2ow:hrew,grew¦2ose:hose¦2d:ilt¦2in:egan¦1un:ran¦1ink:hought¦1ick:tuck¦1ike:ruck¦1eak:poke,nuck¦1it:pat¦1o:did¦1ow:new¦1ake:woke¦go:went','smooth','𝔧','UniformGraphDistribution','growRight','c_int32_t','XML','Ramp','matrix','∓','facebook_dialog','^(#Country|#Region)','_n_','network_config_disable_reliable_udp','Ћ','bMarks','TestReport','barrier','cause-cuz','asset_tiles','IsSelfIntersecting','Protected','ManifestMaxVersionTested','Conjugate','DiscreteInputOutputModel','TotalWidth','StringContainsQ','bezierDetail','#NumberRange','bool\x20cdouble\x20cent\x20cfloat\x20char\x20creal\x20dchar\x20delegate\x20double\x20dstring\x20float\x20function\x20idouble\x20ifloat\x20ireal\x20long\x20real\x20short\x20string\x20ubyte\x20ucent\x20uint\x20ulong\x20ushort\x20wchar\x20wstring','','setTriggerInterval','period','argument3','^ok','#Value\x20[%Plural|Verb%]','part_system_create','VarianceEstimatorFunction','instance_id_get','complex_schur_decompose','adorable-little-store','log1p_exp','ItemStyle','ľ','alternate','secondaryWeaponItems','atomic_cancel','nodefault','overlaps','bill-de-noun','overflow-wrap','#TextValue','background_showcolor','SystemsModelStateFeedbackConnect','achievement_show_leaderboards','getVehicleCargo','buffer_u16','FacialFeatures','TRUNC','GraphQ','leaderboardsRequestUploadScoreKeepBest','must-win','setprotoent','#Determiner\x20[(western|eastern|northern|southern|central)]\x20#Noun','DefaultDuration','[a-zA-Z_]\x5cw*::','linsert|1
0','^(={1,6})[\x20\x09].+?([\x20\x09]\x5c1)?$','(\x5cs*,\x5cs*','GaugeStyle','LibraryFunction','≪','\x20 \x20\x20\x20\x20\x20','usableFromInline','norm1','list-style-image','musee','StringJoin','GroupElementToWord','WeierstrassEta3','MissingException','strokeJoin','↪','RangeSpecification','parray','Transpose','ScriptLevel','scroll-target','↕','safeZoneXAbs','c_float_complex','DoubleUpArrow','(|\x5c*=|\x5c+=|-=|/\x5c*|\x5c*/|\x5c(\x5c*|\x5c*\x5c))','AlgebraicNumber','file_relative','ListLogLinearPlot','had-to-noun','case-lambda\x20call/cc\x20class\x20define-class\x20exit-handler\x20field\x20import\x20inherit\x20init-field\x20interface\x20let*-values\x20let-values\x20let/ec\x20mixin\x20opt-lambda\x20override\x20protect\x20provide\x20public\x20rename\x20require\x20require-for-syntax\x20syntax\x20syntax-case\x20syntax-error\x20unit/sig\x20unless\x20when\x20with-syntax\x20and\x20begin\x20call-with-current-continuation\x20call-with-input-file\x20call-with-output-file\x20case\x20cond\x20define\x20define-syntax\x20delay\x20do\x20dynamic-wind\x20else\x20for-each\x20if\x20lambda\x20let\x20let*\x20let-syntax\x20letrec\x20letrec-syntax\x20map\x20or\x20syntax-rules\x20\x27\x20*\x20+\x20,\x20,@\x20-\x20...\x20/\x20;\x20<\x20<=\x20=\x20=>\x20>\x20>=\x20`\x20abs\x20acos\x20angle\x20append\x20apply\x20asin\x20assoc\x20assq\x20assv\x20atan\x20boolean?\x20caar\x20cadr\x20call-with-input-file\x20call-with-output-file\x20call-with-values\x20car\x20cdddar\x20cddddr\x20cdr\x20ceiling\x20char->integer\x20char-alphabetic?\x20char-ci<=?\x20char-ci\x20char-ci=?\x20char-ci>=?\x20char-ci>?\x20char-downcase\x20char-lower-case?\x20char-numeric?\x20char-ready?\x20char-upcase\x20char-upper-case?\x20char-whitespace?\x20char<=?\x20char\x20char=?\x20char>=?\x20char>?\x20char?\x20close-input-port\x20close-output-port\x20complex?\x20cons\x20cos\x20current-input-port\x20current-output-port\x20denominator\x20display\x20eof-object?\x20eq?\x20equal?\x20eqv?\x20eval\x20even?\x20exact->inexact\x20exact?\x20ex
p\x20expt\x20floor\x20force\x20gcd\x20imag-part\x20inexact->exact\x20inexact?\x20input-port?\x20integer->char\x20integer?\x20interaction-environment\x20lcm\x20length\x20list\x20list->string\x20list->vector\x20list-ref\x20list-tail\x20list?\x20load\x20log\x20magnitude\x20make-polar\x20make-rectangular\x20make-string\x20make-vector\x20max\x20member\x20memq\x20memv\x20min\x20modulo\x20negative?\x20newline\x20not\x20null-environment\x20null?\x20number->string\x20number?\x20numerator\x20odd?\x20open-input-file\x20open-output-file\x20output-port?\x20pair?\x20peek-char\x20port?\x20positive?\x20procedure?\x20quasiquote\x20quote\x20quotient\x20rational?\x20rationalize\x20read\x20read-char\x20real-part\x20real?\x20remainder\x20reverse\x20round\x20scheme-report-environment\x20set!\x20set-car!\x20set-cdr!\x20sin\x20sqrt\x20string\x20string->list\x20string->number\x20string->symbol\x20string-append\x20string-ci<=?\x20string-ci\x20string-ci=?\x20string-ci>=?\x20string-ci>?\x20string-copy\x20string-fill!\x20string-length\x20string-ref\x20string-set!\x20string<=?\x20string\x20string=?\x20string>=?\x20string>?\x20string?\x20substring\x20symbol->string\x20symbol?\x20tan\x20transcript-off\x20transcript-on\x20truncate\x20values\x20vector\x20vector->list\x20vector-fill!\x20vector-length\x20vector-ref\x20vector-set!\x20with-input-from-file\x20with-output-to-file\x20write\x20write-char\x20zero?','fb_login_default','reps','SPI','border','WeekDay','$Cookies','Ì','sync_reject_on','CartesianIndex','vk_pagedown','(#City|#Region|#ProperNoun)$','ev_gesture_dragging','tpl_host_no_ip_fuzzy','BarabasiAlbertGraphDistribution','camSetDir','Method','MenuStyle','0x[0-9a-f]+','RightArrowBar','FileExtension','ev_joystick2_button3','does\x20(#Adverb|not)?\x20[#Adjective]','⥒','(say|says|said)\x20[sorry]','nearEntities','(_?[ui](8|16|32|64|128))?','[<=$]','♠','tile_index_mask','⪠','band','⦮','(urban|cardiac|cardiovascular|respiratory|medical|clinical|visual|graphic|creative|dental|exotic|fine|certified|reg
istered|technical|virtual|professional|amateur|junior|senior|special|pharmaceutical|theoretical)+\x20#Noun?\x20#Actor','setUserActionText','#Adverb\x20[half]','virtual_key_delete','(VC|VS|#)','$rose_gclk','interface\x20extends','offsets','shownSubtitles','Use\x20this\x20question:\x20','SliceDistribution','$SystemMemory','pauseVideo','StrCmpS','part_emitter_stream','isize','ConvertDirection','scroll-padding-inline-end','SpectralLineData','PhrasalVerb','ReplacePart','buffer_load','TreeExtract','UNLOCK','ColonForm','Hypergeometric0F1','ACCRINTM','NumberPadding','MINIFS','collision_line_list','Trim','SolidAngle','ifNo','ev_mouse_leave','FALSE','SystemModels','border-block-start-width','ward','let','addMagazineTurret','DiscreteWaveletPacketTransform','infoPanelComponents','electricity','recv','every','vk_f12','dot_product_3d_normalized','escarpment','PolynomialMod','𝔅','DegreeCentrality','SYMBOL_CHARSET','trace','cmpres','grid-column-end','^did\x20#Infinitive$','CalendarType','EVEN','ef_firework','Delphi','instance_deactivate_object','frameset','EulerMatrix','grid-template-areas','CenterDot','render','RangeError','ugc_match_AllGuides','LogisticSigmoid','Matrix','⫧','⇔','SelfLoopStyle','allTurrets','𝓆','Sound','mkdown','InverseRadonTransform','face-shocking','vinarray','G-code\x20(ISO\x206983)','(he|she|we|you|they|i)','STDEVPA','isGameFocused','instance_activate_object','weapons','AxesOrigin','libraryCredits','ctrlSetFocus','NewPrimitiveStyle','either','updateSlider','RudinShapiro','bacilli','ev_joystick1_button4','⌞','device_mouse_raw_y','^(that|this|those)','PadeApproximant','\x5c$/','#Value\x20and\x20#Value\x20#Fraction','background-blend-mode','#(x|X)[0-9a-fA-F]+(/[0-9a-fA-F]+)?','interrupt','Guid','caption','RecognitionThreshold','ev_global_left_release','$assertvacuousoff','Thread','LightMagenta','-?\x5cw+\x5cs*=>','PSET','tie','RowReduce','displayRemoveAllEventHandlers','expr','url_encode','writing-mode','GradientOrientationFilter','groovy','allowFileOperations','
UNDERSCORE_TITLE_MODE','⪅','PLUGINSDIR','Tilde','audio_get_listener_mask','Iterator','asset|0','MoonPhase','databases','GroupTogetherGrouping','ropeUnwind','TransferFunctionZeros','GroupOpenerColor','$VersionNumber','readln','aspectj','font-semibold','draw_get_alpha','≂̸','iap_ev_storeload','lpt2','ctrlAutoScrollDelay','Clong','$UserAgentString','is_int64','slot','nav-down','Week','IMP','add3DENLayer','PolynomialQuotient','visiblePosition','⌣','𝒥','CreateFont','physics_particle_group_get_centre_x','Star','ev_joystick1_button7','enableSatNormalOnDetail','⪍','do\x20[so]','for-some-reason','Б','phy_bullet','⏝','GeoGridRangePadding','http_post_string','MessageObject','⩆','(this|that|#Comparative|#Superlative)','1-800-','EdgeJoinForm','c_loc','capture','ds_priority_delete_min','Ď','\x22\x20class=\x22bg-blue-500\x20hover:bg-blue-700\x20text-white\x20font-bold\x20py-2\x20px-4\x20rounded-r-md\x22>Search\x0a
\x20What\x20is\x20SocratiQ?\x20
\x20Information\x20provided\x20here\x20may\x20not\x20always\x20be\x20accurate.\x20Provide\x20feedback\x20
\x20\x20An\x20error\x20occurred.\x20Please\x20try\x20again.\x20
\x20