Rewanth Tammana is a security ninja, open-source contributor, and independent consultant & SME at Uptycs. Previously, Senior Security Architect at Emirates NBD. Passionate about DevSecOps, application security, and container security. Added 17,000+ lines of code to Nmap. Holds industry certifications such as CKS and CKA.
https://twitter.com/rewanthtammana
Speaker & trainer at international security conferences including Black Hat, DEF CON, Hack In The Box (Dubai and Amsterdam), CRESTCon UK, PHDays, Nullcon, BSides, CISO Platform, null chapters, and many others.
https://linkedin.com/in/rewanthtammana
One of the MVP researchers on Bugcrowd (2018), having identified vulnerabilities in several organizations. Published an IEEE research paper on an offensive attack at the intersection of machine learning and security. Also part of the renowned Google Summer of Code program.
Reference - washingtonpost.com
Demo - Create a short story using a pre-trained model
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Load a small pre-trained causal language model and its tokenizer
model_name = "distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the model and tokenizer in a text-generation pipeline
story_generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Sample a continuation of the prompt using top-k / nucleus sampling
prompt = "On a sunny day in Paris,"
result = story_generator(prompt, max_length=300, do_sample=True, top_k=50, top_p=0.95, num_return_sequences=1)
print(result[0]['generated_text'])
The power of advanced models on Hugging Face.
Demo - Generating an image from a text description.
from diffusers import DiffusionPipeline

# Load Stable Diffusion v1.5 and move it to the desired device
pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipeline = pipeline.to("mps")  # cpu, cuda, mkldnn, opengl, opencl, ideep, hip, ve, fpga, ort, xla, lazy, vulkan, mps, meta, hpu, mtia, privateuseone

# Recommended if you have 8/16 GB RAM
pipeline.enable_attention_slicing()

prompt = "a photo of an astronaut riding a horse on mars"

# Warm-up pass with a single inference step (recommended when using the mps device)
_ = pipeline(prompt, num_inference_steps=1)

# Full generation run; save each generated image to disk
images = pipeline(prompt).images
for index, image in enumerate(images):
    image.save("image{0}.jpg".format(index))
https://github.com/AUTOMATIC1111/stable-diffusion-webui
Look at a picture from your ML model's point of view.
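A minimal sketch of that idea, assuming Pillow and NumPy are installed and using a placeholder filename photo.jpg: to the model, the picture is nothing more than a grid of numbers.

from PIL import Image
import numpy as np

# Load an image and convert it to the raw pixel array a model actually consumes
image = Image.open("photo.jpg").convert("RGB")  # "photo.jpg" is a placeholder filename
pixels = np.asarray(image, dtype=np.float32) / 255.0  # scale pixel values to [0, 1]

print(pixels.shape)   # e.g. (height, width, 3) - the model only ever sees these numbers
print(pixels[0, :5])  # a few pixel values from the top row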
Demo - Writing a simple Python function with the help of Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

# SantaCoder: a code-generation model from the BigCode project
checkpoint = "bigcode/santacoder"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True).to(device)

# Give the model the start of a function and let it complete the body
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
Reference - helpnetsecurity.com
Write to me on:
Google: Rewanth Tammana
Website: rewanthtammana.com
Twitter: @rewanthtammana
LinkedIn: /in/rewanthtammana