Algorithm Introduction
Find the position of the smallest element in a vector that is greater than or equal to val:
auto it = lower_bound(v.begin(), v.end(), val);
Find the position of the smallest element in a vector that is strictly greater than val:
auto it = upper_bound(v.begin(), v.end(), val);
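As a small illustration (the vector v and value val below are just examples, not from the original), the difference between the two shows up when duplicates are present: lower_bound returns the first element that is >= val, while upper_bound returns the first element that is strictly > val.
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int main() {
    vector<int> v = {1, 3, 3, 5, 7};   // must already be sorted
    int val = 3;

    auto lo = lower_bound(v.begin(), v.end(), val);  // first element >= 3
    auto hi = upper_bound(v.begin(), v.end(), val);  // first element >  3

    cout << lo - v.begin() << endl;    // prints 1 (index of the first 3)
    cout << hi - v.begin() << endl;    // prints 3 (index of the 5)
}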
The program below sorts the input numbers and then finds the upper_bound position of the value find. (lower_bound is used in exactly the same way as upper_bound.)
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int main()
{
    int n;
    while (cin >> n) {
        vector<int> up;
        for (int i = 0; i < n; i++) {
            int a;
            cin >> a;
            up.push_back(a);
        }
        sort(up.begin(), up.end());   // upper_bound/lower_bound require a sorted range
        int find;
        cin >> find;
        // subtracting begin() converts the returned iterator into an index in the vector
        int ans = upper_bound(up.begin(), up.end(), find) - up.begin();
        cout << ans << endl;
    }
}
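As an illustrative run (the numbers are my own, not from the original): with the input
6
2 4 4 6 8 10
4
the sorted vector is 2 4 4 6 8 10, the first element greater than 4 is 6 at index 3, so the program prints 3. Using lower_bound instead would print 1, the index of the first 4.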
upper_bound and lower_bound can also be used with a set. The usage is similar to the vector version, but there are some details to be careful about. The biggest difference is that set has its own member functions, so you pass the value directly, e.g. count.upper_bound(value), instead of a pair of iterators. (lower_bound is used in the same way as upper_bound.) A short sketch of the difference is shown below, followed by an example from APCS January 2021 Problem 3, 切割費用 (cutting cost).
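A minimal sketch of the difference (the set s and the value 5 are just for illustration):
#include <iostream>
#include <algorithm>
#include <set>
using namespace std;

int main() {
    set<int> s = {1, 3, 5, 7, 9};

    // member function: pass the value directly, runs in O(log n)
    auto it1 = s.upper_bound(5);                    // points at 7

    // the generic algorithm also works, but it takes iterators and is much
    // slower on a set, because set iterators are not random access
    auto it2 = upper_bound(s.begin(), s.end(), 5);  // also points at 7

    cout << *it1 << " " << *it2 << endl;            // prints "7 7"
}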
#include <iostream>
#include <vector>
#include <algorithm>
#include <set>
#include <iterator>
using namespace std;

int n, l;
vector<pair<int, int>> knife;   // {cut position, cut order}

// sort the cuts by the order in which they are performed
bool cmp(pair<int, int> a, pair<int, int> b) {
    return a.second < b.second;
}

int main() {
    while (cin >> n >> l) {
        knife.clear();              // reset between test cases (knife is global)
        for (int i = 0; i < n; i++) {
            int a, b;
            cin >> a >> b;
            knife.push_back({a, b});
        }
        sort(knife.begin(), knife.end(), cmp);
        long long ans = 0;
        set<int> count;             // positions that have already been cut
        count.insert(0);            // both ends of the stick
        count.insert(l);
        for (int i = 0; i < n; i++) {
            count.insert(knife[i].first);
            // set's member upper_bound takes the value directly
            auto it = count.upper_bound(knife[i].first);
            it--;                   // it now points at the position just inserted
            // the cost of this cut is the length of the segment it splits,
            // i.e. the distance between its two neighbours in the set
            ans += *next(it) - *prev(it);
        }
        cout << ans << endl;
    }
}
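As an illustrative check (numbers chosen for illustration, not from the APCS test data): with a stick of length l = 10 and two cuts, at position 4 performed first and at position 7 performed second, the first cut splits a segment of length 10 (cost 10) and the second cut splits the segment from 4 to 10 of length 6 (cost 6), so the program prints 16.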