Sparse dictionary learning
http://dbpedia.org/resource/Sparse_dictionary_learning
Sparse dictionary learning (also known as sparse coding) is a representation learning method which aims at finding a sparse representation of the input data in the form of a linear combination of basic elements, as well as those basic elements themselves. These elements are called atoms, and they compose a dictionary. Atoms in the dictionary are not required to be orthogonal, and they may form an over-complete spanning set. This problem setup also allows the dimensionality of the signals being represented to be higher than that of the signals being observed. These two properties lead to seemingly redundant atoms that allow multiple representations of the same signal, but also improve the sparsity and flexibility of the representation.
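As a toy illustration of these properties, the sketch below (a hypothetical NumPy example, not part of the article) builds an over-complete dictionary of four non-orthogonal atoms in R^2 and expresses a signal as a linear combination of a single atom:

```python
import numpy as np

# Hypothetical toy dictionary: four atoms (columns) in R^2, so the
# spanning set is over-complete and the atoms are not orthogonal.
D = np.array([[1.0, 0.0, 1.0,  1.0],
              [0.0, 1.0, 1.0, -1.0]])
D /= np.linalg.norm(D, axis=0)  # normalize each atom to unit length

# A sparse code: the signal uses a single atom out of four.
r = np.array([0.0, 0.0, np.sqrt(2.0), 0.0])
x = D @ r  # reconstruct the signal as a linear combination of atoms

# The signal [1, 1] would need two nonzero coefficients in the
# orthonormal basis {e1, e2}, but only one in this redundant dictionary.
```

The redundancy is exactly what the text describes: the same signal admits several representations over these atoms, and the dictionary can be chosen so that one of them is very sparse.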
rdf:langString
Sparse dictionary learning is a representation learning method that aims to find a set of basic elements such that the input signal has a sparse representation when mapped onto them. These basic elements are called "atoms", and a collection of such atoms forms a "dictionary". The atoms in the dictionary need not form an orthogonal basis, and they are often an over-complete spanning set. The redundancy of atoms means that a signal can be described by many different representations, while also improving the sparsity of the representation, so that a signal can be interpreted with a simpler expression. The main applications of sparse dictionary learning are compressed sensing and signal recovery. In compressed sensing, when a signal is sparse or nearly sparse, a high-dimensional signal can be described from only a few random measurements. In the real world, however, not all signals have this sparsity property, so a sparse representation of the signal must first be found; there are many possible transforms, and different signals call for different ones. Once a high-dimensional signal has been transformed into a sparse one, it can be recovered from a small number of linear measurements using recovery algorithms such as basis pursuit, CoSaMP, or orthogonal matching pursuit (OMP).
rdf:langString
rdf:langString
Sparse dictionary learning
rdf:langString
稀鬆字典學習
xsd:integer
48813654
xsd:integer
1096827280
rdf:langString
Sparse dictionary learning (also known as sparse coding) is a representation learning method which aims at finding a sparse representation of the input data in the form of a linear combination of basic elements, as well as those basic elements themselves. These elements are called atoms, and they compose a dictionary. Atoms in the dictionary are not required to be orthogonal, and they may form an over-complete spanning set. This problem setup also allows the dimensionality of the signals being represented to be higher than that of the signals being observed. These two properties lead to seemingly redundant atoms that allow multiple representations of the same signal, but also improve the sparsity and flexibility of the representation. One of the most important applications of sparse dictionary learning is in the field of compressed sensing or signal recovery. In compressed sensing, a high-dimensional signal can be recovered from only a few linear measurements, provided that the signal is sparse or nearly sparse. Since not all signals satisfy this sparsity condition, it is of great importance to find a sparse representation of the signal, such as the wavelet transform or the directional gradient of a rasterized matrix. Once a matrix or a high-dimensional vector is transformed into a sparse space, different recovery algorithms, such as basis pursuit, CoSaMP, or fast non-iterative algorithms, can be used to recover the signal. One of the key principles of dictionary learning is that the dictionary has to be inferred from the input data. The emergence of sparse dictionary learning methods was stimulated by the fact that in signal processing one typically wants to represent the input data using as few components as possible. Before this approach, the general practice was to use predefined dictionaries, such as those given by Fourier or wavelet transforms.
However, in certain cases a dictionary that is trained to fit the input data can significantly improve sparsity, which has applications in data decomposition, compression, and analysis, and has been used in the fields of image denoising and classification, and video and audio processing. Sparsity and overcomplete dictionaries have immense applications in image compression, image fusion, and inpainting.
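A minimal sketch of one of the recovery algorithms named above, orthogonal matching pursuit, is given below. The measurement matrix, dimensions, and signal are invented for illustration; this is a textbook greedy implementation, not a prescription from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a 20-dimensional signal that is 3-sparse in the
# canonical basis, observed through 15 random linear measurements.
n, m, k = 20, 15, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)  # measurement matrix
x_true = np.zeros(n)
x_true[[2, 7, 15]] = [1.5, -2.0, 0.8]
y = A @ x_true  # the few linear measurements

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the atom most
    correlated with the residual, then refit on the chosen support."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

x_hat = omp(A, y, k)
# x_hat is at most 3-sparse; for generic Gaussian measurements with
# enough rows, OMP typically recovers the sparse signal exactly.
```

Basis pursuit solves a convex l1-minimization instead of this greedy selection, but the interface is the same: a fat measurement matrix, a short measurement vector, and a sparse estimate out.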
rdf:langString
Sparse dictionary learning is a representation learning method that aims to find a set of basic elements such that the input signal has a sparse representation when mapped onto them. These basic elements are called "atoms", and a collection of such atoms forms a "dictionary". The atoms in the dictionary need not form an orthogonal basis, and they are often an over-complete spanning set. The redundancy of atoms means that a signal can be described by many different representations, while also improving the sparsity of the representation, so that a signal can be interpreted with a simpler expression. The main applications of sparse dictionary learning are compressed sensing and signal recovery. In compressed sensing, when a signal is sparse or nearly sparse, a high-dimensional signal can be described from only a few random measurements. In the real world, however, not all signals have this sparsity property, so a sparse representation of the signal must first be found; there are many possible transforms, and different signals call for different ones. Once a high-dimensional signal has been transformed into a sparse one, it can be recovered from a small number of linear measurements using recovery algorithms such as basis pursuit, CoSaMP, or orthogonal matching pursuit (OMP). In this whole process, the key is to find a transform that maps the signal into a domain where it has a sparse representation; in other words, to build a dictionary such that the signal, when projected onto it, has a sparse representation. Sparse dictionary learning finds this transform, i.e. the sparse dictionary, by learning. Sparse dictionary learning arose from the question, in signal processing, of how to describe a signal with as few elements as possible; before it, the Fourier transform and the wavelet transform were generally used. In certain settings, however, transforming with a dictionary obtained through dictionary learning can significantly improve a signal's sparsity. Higher sparsity means the signal is more compressible, so sparse dictionary learning has also been applied to data decomposition, compression, and analysis.
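One simple way to learn such a dictionary from data is the method of optimal directions (MOD), which alternates a sparse-coding step with a closed-form least-squares dictionary update. MOD is one classical algorithm among several (K-SVD and online methods are others); the sketch below uses invented toy dimensions and a deliberately crude 1-sparse coding step, and is an illustration rather than the article's prescribed method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy sizes: signals in R^8, a dictionary of 12 atoms,
# 200 training signals, each generated from exactly one hidden atom.
n, n_atoms, n_signals = 8, 12, 200
D_true = rng.standard_normal((n, n_atoms))
D_true /= np.linalg.norm(D_true, axis=0)
labels = rng.integers(0, n_atoms, n_signals)
X = D_true[:, labels]  # training data, columns are signals

# Random initial dictionary with unit-norm atoms.
D = rng.standard_normal((n, n_atoms))
D /= np.linalg.norm(D, axis=0)

def code(D, X):
    """1-sparse coding: keep only the best-matching atom per signal."""
    corr = D.T @ X
    best = np.argmax(np.abs(corr), axis=0)
    R = np.zeros_like(corr)
    R[best, np.arange(X.shape[1])] = corr[best, np.arange(X.shape[1])]
    return R

for _ in range(20):
    R = code(D, X)                   # sparse-coding step
    D = X @ np.linalg.pinv(R)        # update: argmin_D ||X - D R||_F
    D /= np.linalg.norm(D, axis=0) + 1e-12  # renormalize atoms

R = code(D, X)                       # final coding with learned dictionary
rel_err = np.linalg.norm(X - D @ R) / np.linalg.norm(X)
```

The learned atoms drift toward the hidden generating directions, so the 1-sparse reconstruction error drops well below that of a random dictionary; in practice the coding step would use OMP or an l1 solver with more than one atom per signal.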
xsd:nonNegativeInteger
24081