Record ID | marc_columbia/Columbia-extract-20221130-026.mrc:126458253:9572 |
Source | marc_columbia |
Download Link | /show-records/marc_columbia/Columbia-extract-20221130-026.mrc:126458253:9572?format=raw |
LEADER: 09572cam a2200733Ii 4500
001 12950043
005 20220507225621.0
006 m eo d
007 cr cn||||m|||a
008 161108t20172017caua fob 000 0 eng d
035 $a(OCoLC)ocn962271420
035 $a(NNC)12950043
040 $aCaBNVSL$beng$erda$epn$cJ2I$dJ2I$dWAU$dEBLCP$dYDX$dOCLCA$dOCLCQ$dUIU$dIDB$dBTCTA$dVT2$dOTZ$dCSAIL$dMERER$dOCLCQ$dN$T$dZCU$dRRP$dYDX$dBUF$dCAUOI$dJNA$dOCLCF$dCEF$dU3W$dINT$dESU$dOCLCQ$dLVT$dCOO$dOCLCQ$dOL$dOCLCQ$dOCLCO
019 $a958782241$a962451306$a966556658$a967682021
020 $a9781627052979$q(electronic bk.)
020 $a1627052976$q(electronic bk.)
020 $z9781627054935$q(print)
024 7 $a10.2200/S00735ED1V01Y201609SPT018$2doi
035 $a(OCoLC)962271420$z(OCoLC)958782241$z(OCoLC)962451306$z(OCoLC)966556658$z(OCoLC)967682021
050 4 $aBF637.P74$bL53 2017
072 7 $aFAM$x046000$2bisacsh
072 7 $aPSY$x039000$2bisacsh
072 7 $aPSY$x044000$2bisacsh
072 7 $aPSY$x000000$2bisacsh
082 04 $a155.92$223
049 $aZCUA
100 1 $aLi, Ninghui$c(Computer scientist),$eauthor.
245 10 $aDifferential privacy :$bfrom theory to practice /$cNinghui Li, Min Lyu, Dong Su, Weining Yang.
264 1 $a[San Rafael, California] :$bMorgan & Claypool Publishers,$c[2017]
264 4 $c©2017
300 $a1 online resource (xiii, 124 pages) :$billustrations
336 $atext$btxt$2rdacontent
337 $acomputer$bc$2rdamedia
338 $aonline resource$bcr$2rdacarrier
490 1 $aSynthesis lectures on information security, privacy, & trust,$x1945-9750 ;$v#18
588 0 $aOnline resource; title from PDF title page (Morgan & Claypool, viewed on November 8, 2016).
504 $aIncludes bibliographical references (pages 113-121).
505 0 $a1. Introduction -- 1.1 Privacy violation incidents -- 1.1.1 Privacy incidents -- 1.1.2 Lessons from privacy incidents -- 1.2 On balancing theory and practice -- 1.3 Organization of this book -- 1.4 Topics for volume 2.
505 8 $a2. A primer on [epsilon]-differential privacy -- 2.1 The definition of [epsilon]-DP -- 2.1.1 Bounded DP or unbounded DP -- 2.2 Properties of [epsilon]-DP -- 2.2.1 Post-processing and sequential composition -- 2.2.2 Parallel composition and convexity -- 2.3 The Laplace mechanism -- 2.3.1 The scalar case -- 2.3.2 The vector case -- 2.4 The exponential mechanism -- 2.4.1 The general case of the exponential mechanism -- 2.4.2 The monotonic case of the exponential mechanism -- 2.4.3 Case study: computing mode and median -- 2.4.4 Discussion on the exponential mechanism -- 2.5 Case study: computing average -- 2.5.1 Applying the Laplace and the exponential mechanism -- 2.5.2 Applying the Laplace mechanism and composition -- 2.5.3 A non-private average algorithm using accurate count -- 2.5.4 NoisyAverage with accurate count -- 2.5.5 NoisyAverage with normalization -- 2.5.6 Which is best -- 2.6 Settings to apply DP -- 2.7 Bibliographical notes.
505 8 $a3. What does DP mean? -- 3.1 Limitations of syntactic notions -- 3.2 Semantic guarantees of differential privacy -- 3.2.1 Infeasibility of achieving "privacy as secrecy" -- 3.2.2 Toward a "real-world-ideal-world" approach -- 3.2.3 DP as approximating the ideal world of "privacy as control" -- 3.2.4 A formulation of DP's semantic guarantee -- 3.2.5 The personal data principle -- 3.2.6 A case study in applying PDP -- 3.3 Examining DP and PDP -- 3.3.1 When the notion of neighboring datasets is defined incorrectly -- 3.3.2 When using DP in the local setting -- 3.3.3 What constitutes one individual's data -- 3.3.4 An individual's personal data or personal data under one individual's control -- 3.3.5 Group privacy as a potential legal Achilles' heel for DP -- 3.3.6 A moral challenge to private party benefiting from DP -- 3.4 Additional caveats when using DP -- 3.4.1 Using an [epsilon] that is too large -- 3.4.2 Applying a model to personal data -- 3.4.3 Privacy and discrimination -- 3.5 Bibliographical notes.
505 8 $a4. Publishing histograms for low-dimensional datasets -- 4.1 Problem definition -- 4.1.1 Three settings -- 4.1.2 Measuring utility -- 4.2 Dense pre-defined partitioning -- 4.2.1 The baseline: a simple histogram -- 4.2.2 The hierarchical method -- 4.2.3 Constrained inference -- 4.2.4 Effect of privacy budget allocation in hierarchical histograms -- 4.2.5 Wavelet transforms and other optimizations -- 4.2.6 Beyond one-dimensional datasets -- 4.3 Lacking suitable partitioning -- 4.3.1 The uniform grid method--UG -- 4.3.2 The adaptive grids approach--AG, 2D case -- 4.3.3 Bottom-up grouping -- 4.3.4 Recursive partitioning -- 4.4 Bibliographical notes.
505 8 $a5. Differentially private optimization -- 5.1 Example optimization problems -- 5.1.1 k-means clustering -- 5.1.2 Linear regression -- 5.1.3 Logistic regression -- 5.1.4 SVM -- 5.2 Objective perturbation -- 5.2.1 Adding a noisy linear term to the optimization objective function -- 5.2.2 The functional mechanism -- 5.3 Make an existing algorithm private -- 5.3.1 DPLloyd: differentially private Lloyd algorithm for k-means clustering -- 5.3.2 DiffPID3: differentially private ID3 algorithm for decision tree classification -- 5.4 Iterative local search via EM -- 5.4.1 PrivGene: differentially private model fitting using genetic algorithms -- 5.4.2 Iterative local search -- 5.4.3 Enhanced exponential mechanism -- 5.5 Histograms optimized for optimization -- 5.5.1 Uniform grid and its extensions -- 5.5.2 Histogram publishing for estimating M-estimators -- 5.5.3 DiffGen: differentially private anonymization based on generalization -- 5.5.4 PrivPfC: differentially private data publication for classification -- 5.6 Bibliographical notes.
505 8 $a6. Publishing marginals -- 6.1 Problem definition -- 6.2 Methods that don't fit the problem -- 6.2.1 The flat method -- 6.2.2 The direct method -- 6.2.3 Adding noise in the Fourier domain -- 6.2.4 Data cubes -- 6.2.5 Multiplicative weights mechanism -- 6.2.6 Learning based approaches -- 6.3 The PriView approach -- 6.3.1 Summary of the PriView approach -- 6.3.2 Computing k-way marginals -- 6.3.3 Consistency between noisy views -- 6.3.4 Choosing a set of views -- 6.3.5 Space and time complexity -- 6.4 Bibliographical notes.
505 8 $a7. The sparse vector technique -- 7.1 Introduction -- 7.2 Variants of SVT -- 7.2.1 Privacy proof for proposed SVT -- 7.2.2 Privacy properties of other variants -- 7.2.3 Error in privacy analysis of GPTT -- 7.2.4 Other variants -- 7.3 Optimizing SVT -- 7.3.1 A generalized SVT algorithm -- 7.3.2 Optimizing privacy budget allocation -- 7.3.3 SVT for monotonic queries -- 7.4 SVT vs. EM -- 7.4.1 Evaluation -- 7.5 Bibliographical notes -- Bibliography -- Authors' biographies.
520 3 $aOver the last decade, differential privacy (DP) has emerged as the de facto standard privacy notion for research in privacy-preserving data analysis and publishing. The DP notion offers a strong privacy guarantee and has been applied to many data analysis tasks. This Synthesis Lecture is the first of two volumes on differential privacy. This lecture differs from existing books and surveys on differential privacy in that we take an approach balancing theory and practice. We focus on the empirical accuracy of algorithms rather than on asymptotic accuracy guarantees, and at the same time we try to explain why these algorithms perform as they do empirically. We also take a balanced approach regarding the semantic meanings of differential privacy, explaining both its strong guarantees and its limitations. We start by inspecting the definition and basic properties of DP, and the main primitives for achieving DP. Then, we give a detailed discussion of the semantic privacy guarantee provided by DP and the caveats when applying DP. Next, we review state-of-the-art mechanisms for publishing histograms for low-dimensional datasets, mechanisms for conducting machine learning tasks such as classification, regression, and clustering, and mechanisms for publishing information to answer marginal queries for high-dimensional datasets. Finally, we explain the sparse vector technique, including the many errors that have been made in the literature when using it. The planned Volume 2 will cover the use of DP in other settings, including high-dimensional datasets, graph datasets, the local setting, location privacy, and so on. We will also discuss various relaxations of DP.
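520 3 $aThe abstract above refers to the main primitives for achieving DP, and the contents note for chapter 2 lists the Laplace mechanism among them. The following is a minimal illustrative sketch in Python, not taken from the book; the function name, the numpy dependency, and the example counting query are assumptions made only to show how Laplace noise is calibrated to a query's sensitivity and privacy budget:

import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    # Hypothetical helper (not the book's code): add Laplace noise with
    # scale sensitivity/epsilon, the standard calibration for epsilon-DP
    # when `sensitivity` bounds how much the query answer can change
    # between neighboring datasets.
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_answer + rng.laplace(loc=0.0, scale=scale)

# Example: a counting query has sensitivity 1, since adding or removing
# one record changes the count by at most 1.
true_count = 42
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)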
650 0 $aPrivacy$xMathematical models.
650 0 $aData protection$xMathematics.
650 6 $aVie privée$xModèles mathématiques.
650 6 $aProtection de l'information (Informatique)$xMathématiques.
650 7 $aFAMILY & RELATIONSHIPS$xLife Stages$xGeneral.$2bisacsh
650 7 $aPSYCHOLOGY$xDevelopmental$xGeneral.$2bisacsh
650 7 $aPSYCHOLOGY$xDevelopmental$xLifespan Development.$2bisacsh
650 7 $aPSYCHOLOGY$xGeneral.$2bisacsh
650 7 $aData protection$xMathematics.$2fast$0(OCoLC)fst00887969
653 $aprivacy
653 $aanonymization
655 4 $aElectronic books.
700 1 $aLyu, Min,$eauthor.
700 1 $aSu, Dong$c(Computer scientist),$eauthor.
700 1 $aYang, Weining$c(Computer scientist),$eauthor.
776 08 $iPrint version:$z9781627054935
830 0 $aSynthesis lectures on information security, privacy and trust ;$v#18.$x1945-9742
856 40 $uhttp://www.columbia.edu/cgi-bin/cul/resolve?clio12950043$zAll EBSCO eBooks
852 8 $blweb$hEBOOKS