Elek, Márton

@elek

Joined on Nov 8, 2018

  • The goal here is to measure the maximum possible download speed from a single storage node using only a single thread. Motivation: to achieve decent performance we usually suggest using multiple parallel uploads/downloads. From an offline conversation with @littleskunk, I learned that: we don't really know what the exact limitation of single-thread downloads is; previous performance tests showed that we have some kind of bottleneck, as we couldn't fully utilize the download bandwidth; initial profiling showed that the bottleneck is the small chunk size, as we need a separate RPC call for each chunk (including a separate signature check/calculation!).
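The per-chunk overhead can be illustrated with a back-of-the-envelope sketch (not Storj code; the chunk sizes are assumptions for illustration): if every chunk carries a fixed RPC and signature cost, the total fixed overhead is proportional to the chunk count, so smaller chunks multiply it.

```java
// Sketch: how chunk size drives the number of per-chunk RPC/signature
// round trips for one ~2.3 MB piece. The sizes below are assumptions,
// not values taken from the actual test.
public class ChunkOverhead {

    // Number of chunks needed to cover totalBytes (ceiling division).
    static long chunks(long totalBytes, long chunkBytes) {
        return (totalBytes + chunkBytes - 1) / chunkBytes;
    }

    public static void main(String[] args) {
        long piece = 2_300_000L;                         // ~2.3 MB piece
        long small = chunks(piece, 64 * 1024);           // 64 KiB chunks
        long large = chunks(piece, 1_000_000L);          // 1 MB chunks
        // Each chunk implies one RPC call plus one signature
        // check/calculation, so 'small' pays ~12x the fixed overhead.
        System.out.println(small + " vs " + large + " per-chunk costs");
    }
}
```

With 64 KiB chunks the piece needs 36 round trips; with 1 MB chunks only 3, which is why the profiling points at chunk size as the single-thread bottleneck.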
  • https://en.wikipedia.org/wiki/In_Search_of_Lost_Time https://en.wikipedia.org/wiki/Hungarian_prehistory https://en.wikipedia.org/wiki/Hungarian_prehistory#/media/File:Migration_of_Hungarians.jpg 955. Augsburg https://en.wikipedia.org/wiki/Battle_of_Lechfeld
  • This test creates a new piece store (pieces.NewStore) and tries to write as many 2.3 MB files as possible. Variables: the test is executed both with SHA-256 hashing (current) and with BLAKE (planned); the test is executed both with and without a disk sync before the rename of the commit. 2.3 MB is chosen because the 64 MB block size and the EC parameter of 29 result in a 2.3 MB piece size. Results
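A minimal harness in the spirit of this test (a sketch only, not the actual pieces.NewStore benchmark) could measure hash throughput on 2.3 MB buffers like this. Note that BLAKE digests are not part of the standard JDK, so the planned BLAKE variant would need a third-party provider (e.g. Bouncy Castle) registered first; the sketch shows only the SHA-256 side of the harness.

```java
import java.security.MessageDigest;

// Sketch: hash a ~2.3 MB zero-filled buffer repeatedly and report
// throughput. Round count and buffer contents are arbitrary choices
// for illustration, not values from the original test.
public class HashBench {

    static double mbPerSec(String algo, byte[] piece, int rounds) throws Exception {
        MessageDigest md = MessageDigest.getInstance(algo);
        long start = System.nanoTime();
        for (int i = 0; i < rounds; i++) {
            md.reset();
            md.update(piece);
            md.digest();
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        return (piece.length / 1e6) * rounds / seconds;
    }

    public static void main(String[] args) throws Exception {
        byte[] piece = new byte[2_300_000]; // ~2.3 MB stand-in piece
        System.out.printf("SHA-256: %.1f MB/s%n", mbPerSec("SHA-256", piece, 20));
    }
}
```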
  • Introduction This document explores how we can improve the Ozone volume semantics, especially with respect to the S3 compatibility layer. The Problems Unprivileged users cannot enumerate volumes. The mapping of S3 buckets to Ozone volumes is confusing: based on external feedback, it's hard to understand the exact Ozone URL to be used. The volume name is not friendly and cannot be remembered by humans. Ozone buckets created via the native object store interface are not visible via the S3 gateway. We don't support the revocation of access keys.
  • # Ozone Client and Ozone FileSystem support with older Hadoop versions (2.x / 3.1) Apache Hadoop Ozone is a Hadoop subproject. It depends on the released Hadoop 3.2. But as Hadoop 3.2 is very rare in production, older versions should be supported to make it possible to work together with Spark, Hive, HBase and older clusters. ## The problem We have two separate worlds: the client and the server side. The server can have any kind of dependencies as the classloaders of the server are usual
  • The dark side of the classloader magic === Let's say you have two classloaders in Java: 1. Classloader #1, which loads all the `java.*` and `javax.*` classes 2. Classloader #2, which loads all the `org.apache.hadoop.*` classes Classloader #1 is the __parent__ classloader of Classloader #2. When you use classloader #2 to load a class (class X): 1. If the class is available from #1, it will be loaded from there (_"parent-first"_!) 2. If not, classloader #2 will try to load it ## Filtered C
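The parent-first rule described above can be observed directly: any child classloader that does not override `loadClass` delegates to its parent before trying its own `findClass`, so JDK classes always resolve on the parent (ultimately bootstrap) side. A minimal sketch, with an illustrative child loader that defines nothing itself:

```java
// Sketch of parent-first delegation: the child loader below inherits
// the default loadClass behavior, which asks the parent chain first.
public class ParentFirstDemo {

    // A child classloader with no classes of its own; the inherited
    // loadClass delegates to the parent before calling findClass.
    static class ChildLoader extends ClassLoader {
        ChildLoader(ClassLoader parent) { super(parent); }
    }

    public static void main(String[] args) throws Exception {
        ChildLoader child = new ChildLoader(ParentFirstDemo.class.getClassLoader());
        Class<?> c = child.loadClass("java.lang.String");
        // java.* classes are defined by the bootstrap loader, which is
        // represented as null, so the child never gets to define them:
        System.out.println("defined by child? " + (c.getClassLoader() == child));
    }
}
```

This is exactly why "Filtered" classloaders exist: to break out of the parent-first default when the parent's copy of a class must not win.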
  • --- title: Ozone Enhancement Proposals summary: Definition of the process to share new technical proposals with the Ozone community. --- ## Problem statement Some of the bigger features require well-defined plans before the implementation. Until now this was managed by uploading PDF design docs to selected JIRA issues. There are multiple problems with the current practice. 1. There is no easy way to find existing up-to-date and outdated design docs. 2. Design docs u