# ML for Computer Security: Exam questions
## Introduction
* What are the three main security goals?
* What is part of the “Security Cycle”? Why is there an imbalance? Name one general technique from the lecture that helps to counter that imbalance. Where in the security cycle does it help?
* Name challenges of Machine Learning for Computer Security.
## Machine Learning 101
* What are a training set and a testing set? What is the important requirement for these sets?
* What is loss? Name 3 loss functions. For which kind of learning problem can the hinge loss be used? How must the labels be set to achieve meaningful loss with this function?
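* (Hint, if it matches the lecture's notation: the hinge loss $\ell(y, f(x)) = \max(0,\, 1 - y\, f(x))$ is used for binary classification, e.g. in SVMs; the labels must be encoded as $y \in \{-1, +1\}$ so that $y\, f(x)$ measures the margin.)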
* What is risk? How is expected risk defined? How can it be approximated?
* What is underfitting? What is overfitting? Give an example of how to constrain the complexity of a model.
* Does low empirical risk imply low expected risk? What issue can arise? How can this issue be mitigated?
* Given a set $A \subseteq X\times Y$ of samples with labels, explain what k-fold cross-validation is. What is it used for? Draw a sketch for $k=4$.
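* (A minimal Python sketch of the $k=4$ split, using scikit-learn's `KFold` purely as an illustration, not necessarily the lecture's notation:)
  ```python
  from sklearn.model_selection import KFold
  import numpy as np

  # Toy labeled data: 8 samples, so each of the 4 folds holds 2 samples.
  X = np.arange(8).reshape(-1, 1)
  y = np.array([0, 1, 0, 1, 0, 1, 0, 1])

  # In each of the 4 rounds, one fold is held out for testing
  # and the remaining 3 folds are used for training.
  kf = KFold(n_splits=4)
  for i, (train_idx, test_idx) in enumerate(kf.split(X), start=1):
      print(f"round {i}: train={train_idx.tolist()} test={test_idx.tolist()}")
  # round 1: train=[2, 3, 4, 5, 6, 7] test=[0, 1]
  # round 2: train=[0, 1, 4, 5, 6, 7] test=[2, 3]
  # round 3: train=[0, 1, 2, 3, 6, 7] test=[4, 5]
  # round 4: train=[0, 1, 2, 3, 4, 5] test=[6, 7]
  ```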
* Are XOR, AND, OR linearly separable? Why or why not?
* Name the two main learning model concepts. What are the differences? In the context of Security, what are typical applications of each?
## From Data to Features
* Why are numerical features often not sufficient in the context of computer security?
* How are strings mapped to feature space? How is the mapping function defined? What are methods for coming up with a formal language for this mapping to work?
* What are advantages and disadvantages of feature hashing?
* Describe different methods of processing strings for feature extraction. Name at least one where knowing a delimiter character is required and one where this is not required.
* What are the advantages of normalizing data (in the context of features)? Give formulas of two different approaches.
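* (Two common choices, if they match the lecture: min-max scaling $x' = \frac{x - \min(x)}{\max(x) - \min(x)}$ and standardization (z-score) $x' = \frac{x - \mu}{\sigma}$ with mean $\mu$ and standard deviation $\sigma$.)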
* What properties does a function need to have in order to be considered a kernel?
* Show that $|| \phi (x) - \phi(z)||_{2} = (k(x, x) + k(z, z) - 2k(x, z))^{0.5}$.
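* (Derivation sketch, assuming $k(x,z)=\langle\phi(x),\phi(z)\rangle$: $\|\phi(x)-\phi(z)\|_2^2 = \langle\phi(x)-\phi(z),\phi(x)-\phi(z)\rangle = \langle\phi(x),\phi(x)\rangle - 2\langle\phi(x),\phi(z)\rangle + \langle\phi(z),\phi(z)\rangle = k(x,x) - 2k(x,z) + k(z,z)$; taking the square root yields the claim.)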
* What makes a kernel function so useful?
* Compute the similarity score of "implementation" and "installation" with 2-grams ($n=2$) and the dot product.
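* (A minimal Python sketch, assuming plain character 2-gram counts without padding; the exact embedding used in the lecture may differ:)
  ```python
  from collections import Counter

  def ngrams(s, n=2):
      """Character n-grams of s (no padding), as a sparse count vector."""
      return Counter(s[i:i + n] for i in range(len(s) - n + 1))

  def dot(a, b):
      """Dot product of two sparse count vectors."""
      return sum(count * b[g] for g, count in a.items())

  x = ngrams("implementation")  # im, mp, pl, le, em, me, en, nt, ta, at, ti, io, on
  z = ngrams("installation")    # in, ns, st, ta, al, ll, la, at, ti, io, on
  print(dot(x, z))  # shared 2-grams ta, at, ti, io, on -> 5
  ```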
## String Processing
* Explain what a count-min sketch is. How are values inserted? How are values retrieved? What guarantee is there to the values retrieved?
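* (A minimal Python sketch of a count-min sketch; the width, depth, and hash construction are illustrative choices, not necessarily the lecture's:)
  ```python
  import hashlib

  class CountMinSketch:
      """Minimal count-min sketch: d rows of w counters each."""

      def __init__(self, width=1024, depth=4):
          self.width = width
          self.depth = depth
          self.table = [[0] * width for _ in range(depth)]

      def _index(self, item, row):
          # One (roughly independent) hash per row, derived from a salted md5.
          digest = hashlib.md5(f"{row}:{item}".encode()).hexdigest()
          return int(digest, 16) % self.width

      def insert(self, item, count=1):
          # Insertion: increment one counter in every row.
          for row in range(self.depth):
              self.table[row][self._index(item, row)] += count

      def query(self, item):
          # Retrieval: take the minimum over the rows. Collisions can only
          # inflate counters, so the estimate never undershoots the true
          # count (one-sided error).
          return min(self.table[row][self._index(item, row)]
                     for row in range(self.depth))

  cms = CountMinSketch()
  for token in ["GET", "GET", "POST", "GET"]:
      cms.insert(token)
  print(cms.query("GET"), cms.query("POST"), cms.query("PUT"))  # 3 1 0 (upper bounds)
  ```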
* What is the difference between a Bloom Filter and a Count-Min-Sketch?
* What makes suffix trees the *Swiss army knife* of string analysis?
* Why is LCS not a good idea to use on a real-world dataset? Explain with an example. What would be a better method?
* Name three different kinds of signatures.
* Name at least one way in which an attacker can make the signature generation algorithm less effective.
* What are common steps towards creating signatures?
* Are signatures robust?
## Anomaly Detection
* Briefly explain the three/four types of features presented in the lecture.
* Given a one class SVM, $X$ a set of benign data, $\mu$ the center of the hyper-sphere. State the optimization problem. What would you do in the real world to make this robust against outliers? Explain referencing the optimization problem.
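* (A sketch of the hypersphere formulation matching the question's notation, with radius $r$: $\min_{\mu, r}\; r^2$ s.t. $\|\phi(x_i) - \mu\|_2^2 \le r^2$ for all $x_i \in X$. For robustness against outliers, add slack variables: $\min_{\mu, r, \xi}\; r^2 + C \sum_i \xi_i$ s.t. $\|\phi(x_i) - \mu\|_2^2 \le r^2 + \xi_i$, $\xi_i \ge 0$, so individual outliers may lie outside the sphere at a cost controlled by $C$.)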
* How does the regularization parameter in SVM affect the decision boundary?
* Describe three different approaches for learning-based attack detection.
* What assumptions are made for anomaly detection?
* What geometric algorithms have we seen for modeling normality?
* First give the formula which describes the global model of normality as a center of mass. Then explain in a few words what it intuitively means.
* Give the formula to explicitly compute the anomaly score by distance from the center.
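* (Likely intended formulas, hedged: center of mass $\mu = \frac{1}{n}\sum_{i=1}^{n}\phi(x_i)$, i.e. the mean of all (mapped) benign points; anomaly score of a new point $z$: $s(z) = \|\phi(z) - \mu\|_2$, its distance from that center.)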
* What are drawbacks of signatures?
* Show that using a kernelized center of mass the anomaly score can be calculated as follows without knowing the explicit definition of $\phi$: $||\phi(z)-\mu||_2^2=k(z, z)-\frac{2}{n} \sum_{i=1}^{n} k\left(z, x_{i}\right)+\frac{1}{n^{2}} \sum_{i, j=1}^{n} k\left(x_{i}, x_{j}\right)$
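* (Derivation sketch: with $\mu=\frac{1}{n}\sum_{i=1}^{n}\phi(x_i)$, expand $\|\phi(z)-\mu\|_2^2 = \langle\phi(z),\phi(z)\rangle - 2\langle\phi(z),\mu\rangle + \langle\mu,\mu\rangle$; by linearity $\langle\phi(z),\mu\rangle = \frac{1}{n}\sum_{i=1}^{n} k(z,x_i)$ and $\langle\mu,\mu\rangle = \frac{1}{n^2}\sum_{i,j=1}^{n} k(x_i,x_j)$, which gives the stated expression without ever evaluating $\phi$ explicitly.)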
## Malware Classification
* What assumptions do we make when it comes to classification for attack detection?
* For malware classification using machine learning you need a lot of samples. Name resources to get them from. What kind of learning can be done with such data? Give the signature of the learning function.
* Define the decision function of the perceptron as well as the perceptron rule for learning.
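* (A common formulation, if it matches the lecture's notation: decision function $f(x) = \operatorname{sign}(\langle w, x\rangle + b)$; perceptron rule: for a misclassified sample $(x_i, y_i)$ with $y_i \in \{-1,+1\}$, update $w \leftarrow w + \eta\, y_i x_i$ and $b \leftarrow b + \eta\, y_i$.)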
* Define the decision function of SVMs and the two-class SVM optimization problem. Define the extended optimization problem which deals with non-linearly separable data.
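* (A common formulation, hedged: decision function $f(x) = \operatorname{sign}(\langle w, x\rangle + b)$; hard-margin problem $\min_{w,b} \frac{1}{2}\|w\|_2^2$ s.t. $y_i(\langle w, x_i\rangle + b) \ge 1$ for all $i$; the soft-margin extension for non-linearly separable data adds slack variables: $\min_{w,b,\xi} \frac{1}{2}\|w\|_2^2 + C\sum_i \xi_i$ s.t. $y_i(\langle w, x_i\rangle + b) \ge 1 - \xi_i$, $\xi_i \ge 0$.)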
* Give examples of attacks against classification methods.
* What does the term "support vector" mean in the context of support vector machines? Name a drawback of relying on support vectors.
* What is the difference between a one-class and a two-class SVM? Which is used for what?
## Concept Drift
* What is concept drift?
* What is the confusion matrix? Define precision and recall.
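* (For reference, using TP/FP/TN/FN from the confusion matrix: $\text{precision} = \frac{TP}{TP + FP}$, $\text{recall} = \frac{TP}{TP + FN}$.)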
* Is accuracy always a good metric?
* What are the main types of dataset shift? Give one example for each. State the changing distribution.
* What are causes of concept drift?
* What are possible approaches to deal with dataset shift (especially if caused by a non-stationary domain)?
* What classification technique can be helpful in detecting a concept drift in the data? Explain.
## Vulnerability
* What are the two approaches to vulnerability discovery mentioned in the lecture?
* (Finding vulnerabilities by looking for code similar to known vulnerabilities.)
* (Identifying anomalous code patterns.)
* How is code mapped to meaningful features? What do we need for this mapping?
* (Mapping with sub-trees/sub-graphs: $\phi(x) = (w_s \cdot occ(s,x))_{s \in S}$.)
* (Needed: a structural representation of the code that yields $S$, and weights $w_s$, e.g. tf, idf, or tf-idf.)
* Briefly explain the heartbleed bug.
* Name different representations of code as a graph.
* Rank the following (types of) words according to their expected TF-IDF score. Explain the rationale behind TF-IDF. (See the answer sketch after the list.)
* a word that is common in every document.
* a word that is not present in the current document, but in every other document.
* a word that only occurs in the current document.
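* (Answer sketch, assuming the standard $\operatorname{tfidf}(t,d) = \operatorname{tf}(t,d) \cdot \log\frac{N}{\operatorname{df}(t)}$: highest score for the word that only occurs in the current document (maximal idf), then the word common in every document (idf $\approx 0$ despite high tf), lowest for the word absent from the current document (tf $= 0$, so the score is 0). Rationale: tf rewards terms frequent in the document, idf discounts terms that appear in many documents.)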
* Why do we apply dimensionality reduction?
* What is the idea behind taint-analysis? Is it a method of static or dynamic program analysis?
* (answer to second question: both)
## Fuzzing
* Name different types of fuzzers depending on how the input is generated.
* Name different types of fuzzers depending on the level of awareness of the program structure. Compare the three fuzzer types based on their computation time.
* Which types (referring to the level of awareness) give you a sense of code coverage? How is that realized?
* AFL can be viewed as a genetic algorithm. What is its fitness function?
* What is the main idea (or what are some main ideas) of AFLfast?
* (Power schedule: assign low energy to high-frequency paths => prioritize low-frequency paths.)
* (Exponentially increase the energy, i.e. the number of fuzz inputs generated from a seed, each time the seed is picked from the queue.)
* What is NEUZZ? What is the rough idea behind it?
* Suggestion:
* NEUZZ (= neural program smoothing) views fuzzing as an optimization problem in which the number of bugs found, $F(x)$, over all used inputs $x$ is maximized.
* It then abstracts from that specific goal (as other fuzzers do, too) and optimizes edge coverage $G(x)$, a form of code coverage, instead.
* Lastly, it approximates $G(x)$ with a smooth, differentiable function $H(x)$ learned by a neural network.
* On $H(x)$ the gradient with respect to an input $x$ can be computed and used to update $x$, i.e. to search for inputs that maximize $H(x)$.
## Explainable ML
* How can descriptive accuracy be measured?
* Name three advantages of white-box explanations over black-box explanation tools.
* What is LIME?
* What are the two main approaches for black-box explanations? Name and explain them.
* Provide an example where using a global surrogate model might be problematic. How can we solve this?
* How can two explanation methods be compared? What is a good value of this metric and why?
* (Intersection size of the top-$k$ features. Good: close to 1 => more confidence in the $k$ features, because they were identified as relevant by two different methods.)
* What are useful general and security-related criteria to evaluate explanations?
* What is the pre-image of the hash value `360f6273866be960733428da599e9e63`, mentioned at the bottom right on the "Weekly Q&A Session" slide?
* What is MD5? What is hashcat? Explain.
## Adversarial ML
* What is the difference between a sparse and a dense attack?
* Describe the inverse feature-mapping problem.
* Explain the challenges and constraints when coming up with adversarial examples in the field of malware code.
* What methods of detecting adversarial examples and poisoning attacks are there?
* How does obfuscation of a prediction function work? Is it effective? Explain your answer.
* What two basic types of strategies are used to defend against attacks on ML?
* What is security-aware testing?
* Name and explain different types of attacks against machine learning.
* You have successfully derived a surrogate model $\theta^+$. What attacks can you perform with it?
* Why is there no strong security mechanism for ML to date?
## Comprehensive Questions
You are assigned the task of monitoring a production system that uses a very heterogeneous Industry 4.0 protocol environment.
* How would you proceed to build an IDS for this environment?
- Build an IDS with ML based on anomaly detection:
- Use binary representations of the protocol messages, since the protocols may vary a lot and some of them might be binary rather than text based.
- Learn a model of normality per message type / protocol.
Same environment: Your boss has now come up with a new malware classification system and is hyped to install it and give all clients in the system access so they can run checks.
He also wants clients in the production system to automatically query the system whenever they receive a new piece of code to execute, to check whether the code is malware or not.
* Is this a good idea? What could be critical, and how could you mitigate the threat?
- Generally I'd say this is rather progressive, but it might actually help to defend against hijacking attacks.
- A problem would be that an infected device can run a model-stealing attack against the classifier. A mitigation approach would be to keep the service stateful and track each device with an ID; doing so, one may identify anomalous querying behaviour of a client.
(Ok, the examples are constructed a little ... well, not straightforward, but at least close to the slides. Still, such comprehensive questions, where the question does not point directly to the solution, may well come up.)
So, good luck everyone!
# Chat
Hi, some comments above mention answers, but I can't find any. Are there any answers to these questions, and if so, where can I find them? Thanks.
Answers are available here: https://wachter-space.de/2020/08/06/mlforsec.html However, I think they are only written by one dude. We cannot participate other than by using comments in this document to give answers / to point out problems with existing answers. -- Christoph
------
Hi, does anyone know whether the mailing list is moderated? I answered the question that came in today right after it was posted but my answer is not visible in the mail archive despite two other answers having arrived by now. Just wanna know whether my answer sucked and therefore wasn't admitted or whether there's a technical problem. -- Christoph
I think the mailing list is moderated. When I sent a mail there was a delay until it appeared on the mailing list. However, I do not think that mails are rejected just because they contain an error. I personally think that there can be no answers that suck, just opportunities to learn. Maybe Prof. Wressnegger missed it or there was a technical problem. -- Liam
Alright, thank you. Imma post my answer here. If there's something wrong with it, please let me know so I can learn. :) -- Christoph
> Hi,
>
> disclaimer: I have no idea ...
>
> ... but I think that this is about not eliminating otherwise useful substrings. Say you have a substring "tomato" that occurs very often. Now you don't want that substring to be eliminated just because "tomatoketchup" makes it over the threshold too. If "tomatoketchup" occurs k times but "tomato" occurs 2k times, "tomato" would've made it if it wasn't for "tomatoketchup". That is: If you eliminate all occurrences of "tomatoketchup", there still are enough occurrences of "tomato" to make it over the threshold.
>
> However, I think that a more useful criterion would be to require that "tomato" occurs at least an additional k times compared to "tomatoketchup". If "tomato" occurred 2k times and "tomatoketchup" 2k-1 times, "tomato" wouldn't be a useful substring at all.
>
> Kind regards
> Christoph