Rewanth Tammana is a security ninja, open-source contributor, independent consultant, and SME at Uptycs. Previously Senior Security Architect at Emirates NBD. Passionate about DevSecOps, application security, and container security. Added 17,000+ lines of code to Nmap. Holds industry certifications such as CKS and CKA.
https://twitter.com/rewanthtammana
Speaker & trainer at international security conferences including Black Hat, DEF CON, Hack In The Box (Dubai and Amsterdam), CRESTCon UK, PHDays, Nullcon, BSides, CISO Platform, null chapters, and many others.
https://linkedin.com/in/rewanthtammana
One of the MVP researchers on Bugcrowd (2018), having identified vulnerabilities in several organizations. Published an IEEE research paper on an offensive attack in machine learning and security. Also part of the renowned Google Summer of Code program.
Demo - Create a short story using a pre-trained model
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Load a small pre-trained causal language model and its tokenizer
model_name = "distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap them in a text-generation pipeline
story_generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Sample up to 300 tokens, restricting each step to the 50 most likely
# tokens (top_k) within the top 95% of probability mass (top_p)
prompt = "On a sunny day in Paris,"
result = story_generator(prompt, max_length=300, do_sample=True, top_k=50, top_p=0.95, num_return_sequences=1)
print(result[0]['generated_text'])
The power of advanced models on Hugging Face.
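The pipeline call above works unchanged with a more powerful checkpoint; only the model name changes. A minimal sketch, assuming "gpt2-large" as an illustrative choice (any causal LM checkpoint from the Hub would do, at the cost of a bigger download and slower generation):

# A minimal sketch: the same pipeline with a larger model.
# "gpt2-large" is an illustrative choice, not from the original demo.
from transformers import pipeline

story_generator = pipeline("text-generation", model="gpt2-large")
result = story_generator(
    "On a sunny day in Paris,",
    max_length=300, do_sample=True, top_k=50, top_p=0.95,
)
print(result[0]["generated_text"])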
Demo - Generating an image from a text description.
from diffusers import DiffusionPipeline

# Load the Stable Diffusion v1.5 weights
pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Move the pipeline to your accelerator: "mps" for Apple Silicon;
# other options include "cpu" and "cuda"
pipeline = pipeline.to("mps")

# Recommended if you have 8/16 GB RAM: trades speed for lower memory usage
pipeline.enable_attention_slicing()

prompt = "a photo of an astronaut riding a horse on mars"

# One-step warm-up pass (recommended on MPS to prime the pipeline)
_ = pipeline(prompt, num_inference_steps=1)

# Full generation
images = pipeline(prompt).images
for index, image in enumerate(images):
    image.save("image{0}.jpg".format(index))
Look at the picture from your ML model's point of view.
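To the model, the generated picture is nothing but an array of numbers. A minimal sketch, assuming the image0.jpg produced above plus the Pillow and NumPy packages:

# A minimal sketch: the image as the model sees it, i.e. a pixel array.
# Assumes image0.jpg from the previous demo, Pillow, and NumPy.
from PIL import Image
import numpy as np

image = np.array(Image.open("image0.jpg"))
print(image.shape)   # e.g. (512, 512, 3): height x width x RGB channels
print(image[0, 0])   # the top-left pixel as three 0-255 intensities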
Demo - Writing a simple Python function with the help of Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/santacoder"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# trust_remote_code=True executes the model's custom code from the Hub,
# so only enable it for repositories you trust
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True).to(device)

# Give the model the start of a function and let it complete the body
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
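With default settings, generate() stops after about 20 tokens, so the completion is short. A hedged tweak using max_new_tokens, a standard generate() argument, to allow a longer completion:

# Same call with an explicit budget of newly generated tokens
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))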
Reach out to me on:
Google: Rewanth Tammana
Website: rewanthtammana.com
Twitter: @rewanthtammana
LinkedIn: /in/rewanthtammana