Seller
GreatBookPricesUK, Woodford Green, United Kingdom
Seller rating: 5 out of 5 stars
AbeBooks seller since 28 January 2020
Unread book in perfect condition. Seller inventory number 16120778
Synopses for Massive Data: Samples, Histograms, Wavelets, Sketches describes basic principles and recent developments in building approximate synopses (i.e., lossy, compressed representations) of massive data. Such synopses enable approximate query processing (AQP), in which the user's query is executed against the synopsis instead of the original data. The monograph focuses on the four main families of synopses: random samples, histograms, wavelets, and sketches. A random sample comprises a "representative" subset of the data values of interest, obtained via a stochastic mechanism. Samples can be quick to obtain and can be used to approximately answer a wide range of queries. A histogram summarizes a data set by grouping the data values into subsets, or "buckets," and then, for each bucket, computing a small set of summary statistics that can be used to approximately reconstruct the data in the bucket. Histograms have been extensively studied and have been incorporated into the query optimizers of virtually all commercial relational DBMSs. Wavelet-based synopses were originally developed in the context of image and signal processing. The data set is viewed as a set of M elements in a vector - i.e., as a function defined on the set {0, 1, 2, ..., M-1} - and the wavelet transform of this function is found as a weighted sum of wavelet "basis functions." The weights, or coefficients, can then be "thresholded," for example, by eliminating coefficients that are close to zero in magnitude. The remaining small set of coefficients serves as the synopsis. Wavelets are good at capturing features of the data set at various scales. Sketch summaries are particularly well suited to streaming data. Linear sketches, for example, view a numerical data set as a vector or matrix and multiply the data by a fixed matrix. Such sketches are massively parallelizable. They can accommodate streams of transactions in which data is both inserted and removed. Sketches have also been used successfully to estimate the answer to COUNT DISTINCT queries, a notoriously hard problem. Synopses for Massive Data describes and compares the different synopsis methods. It also discusses the use of AQP within research systems and discusses challenges and future directions. It is essential reading for anyone working with, or doing research on, massive data.
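As a concrete illustration of the linear-sketch idea mentioned in the synopsis (not an example taken from the book itself), the following minimal Python sketch multiplies a data vector by a fixed random ±1 matrix, keeps the resulting k numbers as the synopsis, and updates it incrementally as stream transactions insert or delete values; the sketch size k, the random seed, and the L2-norm estimate are illustrative assumptions.

```python
# Minimal sketch of a linear sketch (a random-projection-style synopsis),
# assuming NumPy is available. Because the map x -> S @ x is linear, the
# synopsis can be maintained under a stream of insertions and deletions.
import numpy as np

rng = np.random.default_rng(seed=42)

M = 10_000   # dimension of the (conceptual) data vector
k = 200      # sketch size, k << M

# Fixed random +/-1 projection matrix, scaled so that ||S x|| ~ ||x||.
S = rng.choice([-1.0, 1.0], size=(k, M)) / np.sqrt(k)

sketch = np.zeros(k)   # the synopsis: k numbers instead of M

def update(index: int, delta: float) -> None:
    """Apply one stream transaction: add `delta` to x[index] (negative = deletion)."""
    # Linearity: sketch(x + delta * e_i) = sketch(x) + delta * S[:, i]
    sketch[:] += delta * S[:, index]

def estimate_l2_norm() -> float:
    """Estimate the Euclidean norm of the underlying data from the sketch alone."""
    return float(np.linalg.norm(sketch))

# Simulate a stream of insertions and deletions and compare against the truth.
x = np.zeros(M)
for _ in range(5_000):
    i = int(rng.integers(M))
    delta = float(rng.choice([1.0, -1.0]))
    x[i] += delta
    update(i, delta)

print("true   ||x||:", np.linalg.norm(x))
print("sketch ||x||:", estimate_l2_norm())
```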
Title: Synopses for Massive Data: Samples, ...
Publisher: Now Publishers
Publication date: 2011
Binding: Softcover
Condition: As New
Seller: Rarewaves USA, OSWEGO, IL, USA
Paperback. Condition: New. Seller inventory number LU-9781601985163
Quantity: More than 20 available
Seller: THE SAINT BOOKSTORE, Southport, United Kingdom
Paperback / softback. Condition: New. This item is printed on demand. New copy - usually dispatched within 5-9 working days. Seller inventory number C9781601985163
Quantity: More than 20 available
Seller: Rarewaves USA United, OSWEGO, IL, USA
Paperback. Condition: New. Seller inventory number LU-9781601985163
Quantity: More than 20 available
Seller: moluna, Greven, Germany
Condition: New. This is a print-on-demand item and will be printed for you after you place your order. Describes basic principles and recent developments in building approximate synopses (that is, lossy, compressed representations) of massive data. The book focuses on the four main families of synopses: random samples, histograms, wavelets, and sketches. Seller inventory number 448142494
Quantity: More than 20 available
Seller: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. Seller inventory number ria9781601985163_new
Quantity: More than 20 available
Seller: Rarewaves.com UK, London, United Kingdom
Paperback. Condition: New. Seller inventory number LU-9781601985163
Quantity: More than 20 available
Seller: Rarewaves.com USA, London, United Kingdom
Paperback. Condition: New. Seller inventory number LU-9781601985163
Quantity: More than 20 available
Seller: AHA-BUCH GmbH, Einbeck, Germany
Paperback. Condition: New. New item, printed after ordering. Seller inventory number 9781601985163
Quantity: 1 available