Feature Selection for Knowledge Discovery and Data Mining
792,62 €
880,69 €
  • We will send in 10–14 business days.

Feature Selection for Knowledge Discovery and Data Mining (e-book) (used book) | bookbook.eu

Reviews

(4.00 Goodreads rating)

Description

As computer power grows and data collection technologies advance, a plethora of data is generated in almost every field where computers are used. Computer-generated data should be analyzed by computers; without the aid of computing technologies, it is certain that huge amounts of collected data will never be examined, let alone used to our advantage. Even with today's advanced computer technologies (e.g., machine learning and data mining systems), discovering knowledge from data can still be fiendishly hard due to the characteristics of computer-generated data. In its simplest form, raw data are represented as feature values. The size of a dataset can be measured in two dimensions: the number of features (N) and the number of instances (P). Both N and P can be enormously large. This enormity may cause serious problems for many data mining systems. Feature selection is one of the long-existing methods that deal with these problems. Its objective is to select a minimal subset of features according to some reasonable criteria so that the original task can be achieved equally well, if not better. By choosing a minimal subset of features, irrelevant and redundant features are removed according to the criterion. When N is reduced, the data space shrinks and, in a sense, the dataset becomes a better representative of the whole data population. If necessary, the reduction of N can also give rise to the reduction of P by eliminating duplicates.
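The idea sketched in the blurb — remove irrelevant and redundant features so that N shrinks, after which duplicate instances may appear and P can shrink too — can be illustrated with a toy filter. This sketch is not taken from the book; the function name and the constant-column/duplicate-column criteria are illustrative assumptions standing in for whatever criterion a real method would use:

```python
def select_features(rows):
    """Toy filter-style feature selection over P instances x N features.

    rows: list of instances, each a tuple of feature values.
    Returns (kept_feature_indices, reduced_deduplicated_rows).
    """
    n = len(rows[0])
    # View the data column-wise: one tuple per feature.
    columns = [tuple(r[i] for r in rows) for i in range(n)]
    kept, seen = [], set()
    for i, col in enumerate(columns):
        if len(set(col)) <= 1:   # constant column: carries no information (irrelevant)
            continue
        if col in seen:          # identical to an already-kept column (redundant)
            continue
        seen.add(col)
        kept.append(i)
    # Project instances onto the kept features (N is reduced).
    reduced = [tuple(r[i] for i in kept) for r in rows]
    # Reducing N can create duplicate instances; eliminating them reduces P.
    deduped = list(dict.fromkeys(reduced))
    return kept, deduped

data = [
    (1, 0, 1, 5),
    (0, 0, 0, 5),
    (1, 0, 1, 5),
    (0, 0, 0, 5),
]
kept, reduced = select_features(data)
print(kept)     # indices of retained features
print(reduced)  # deduplicated instances over those features
```

Here feature 1 and feature 3 are constant, feature 2 duplicates feature 0, so only feature 0 survives; the four instances then collapse to two. Real methods (e.g., the search- and criterion-based approaches the book surveys) replace these crude tests with principled relevance and redundancy measures.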

  • Author: Huan Liu
  • Publisher:
  • ISBN-10: 079238198X
  • ISBN-13: 9780792381983
  • Format: 16.3 x 24.3 x 1.9 cm, hardcover
  • Language: English


Reviews

  • No reviews
0 customers have rated this item.